Dataset columns and value statistics:
- id: string (length 40 to 40)
- source: string (9 classes)
- title: string (length 2 to 345)
- clean_text: string (length 35 to 1.63M)
- raw_text: string (length 4 to 1.63M)
- url: string (length 4 to 498)
- overview: string (length 0 to 10k)
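For orientation, the sketch below shows how rows with this schema might be loaded and sanity-checked. It is a minimal illustration only: it assumes the dump has been exported to a JSON Lines file, and the path rows.jsonl is a hypothetical placeholder rather than part of the dataset description.

```python
import json

# Hypothetical export path: one JSON object per line, with the columns
# listed above (id, source, title, clean_text, raw_text, url, overview).
PATH = "rows.jsonl"

with open(PATH, encoding="utf-8") as fh:
    rows = [json.loads(line) for line in fh]

# Light checks against the stated column statistics.
for row in rows:
    assert len(row["id"]) == 40                 # 40-character hash ids
    assert 2 <= len(row["title"]) <= 345        # title length range
    assert isinstance(row["clean_text"], str)   # free-text article body

# Example: list where each article came from.
for row in rows[:5]:
    print(row["source"], "|", row["title"], "|", row["url"])
```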
6fc14a36994fd7f838aa40dfb93e5d6e63a3eff8
wikidoc
Pelvic cavity
Pelvic cavity The pelvic cavity is a body cavity that is bounded by the bones of the pelvis and which primarily contains the reproductive organs, the urinary bladder, and the rectum. # Borders The boundaries are as follows: # Greater and lesser pelvis The lesser pelvis (or "true" pelvis) includes only structures inferior to the pelvic brim. For example, the pelvic splanchnic nerves arising at S2-S4 are in the true pelvis, but the femoral nerve from L2-L4 is only in the "false pelvis", or greater pelvis. # Ligaments # Arteries - internal iliac artery - median sacral artery - ovarian artery # Nerves - sacral plexus - splanchnic nerves # Additional images - Articulations of pelvis. Anterior view. - The arteries of the pelvis. - Dissection of side wall of pelvis showing sacral and pudendal plexuses. - Sacral plexus of the right side.
https://www.wikidoc.org/index.php/Pelvic_cavity
456013e0b52970173a385e86fceb5d72fde3d35d
wikidoc
Pembrolizumab
Pembrolizumab # Disclaimer WikiDoc MAKES NO GUARANTEE OF VALIDITY. WikiDoc is not a professional health care provider, nor is it a suitable replacement for a licensed healthcare provider. WikiDoc is intended to be an educational tool, not a tool for any form of healthcare delivery. The educational content on WikiDoc drug pages is based upon the FDA package insert, National Library of Medicine content and practice guidelines / consensus statements. WikiDoc does not promote the administration of any medication or device that is not consistent with its labeling. Please read our full disclaimer here. # Overview Pembrolizumab is a monoclonal antibody that is FDA approved for the treatment of patients with unresectable or metastatic melanoma and disease progression following ipilimumab and, if BRAF V600 mutation positive, a BRAF inhibitor. Common adverse reactions include pruritus, rash, constipation, decreased appetite, diarrhea, nausea, arthralgia, cough, fatigue, erythroderma, adrenal insufficiency, hypophysitis, anemia, hemolytic anemia, and pneumonitis. # Adult Indications and Dosage ## FDA-Labeled Indications and Dosage (Adult) - Pembrolizumab is indicated for the treatment of patients with unresectable or metastatic melanoma and disease progression following ipilimumab and, if BRAF V600 mutation positive, a BRAF inhibitor. - This indication is approved under accelerated approval based on tumor response rate and durability of response. An improvement in survival or disease-related symptoms has not yet been established. Continued approval for this indication may be contingent upon verification and description of clinical benefit in the confirmatory trials. - The recommended dose of pembrolizumab is 2 mg/kg administered as an intravenous infusion over 30 minutes every 3 weeks until disease progression or unacceptable toxicity. - Withhold pembrolizumab for any of the following: - Grade 2 pneumonitis - Grade 2 or 3 colitis - Symptomatic hypophysitis - Grade 2 nephritis - Grade 3 hyperthyroidism - Aspartate aminotransferase (AST) or alanine aminotransferase (ALT) greater than 3 and up to 5 times upper limit of normal (ULN) or total bilirubin greater than 1.5 and up to 3 times ULN - Any other severe or Grade 3 treatment-related adverse reaction - Resume pembrolizumab in patients whose adverse reactions recover to Grade 0-1. - Permanently discontinue pembrolizumab for any of the following: - Any life-threatening adverse reaction - Grade 3 or 4 pneumonitis - Grade 3 or 4 nephritis - AST or ALT greater than 5 times ULN or total bilirubin greater than 3 times ULN - For patients with liver metastasis who begin treatment with Grade 2 AST or ALT, if AST or ALT increases by greater than or equal to 50% relative to baseline and lasts for at least 1 week - Grade 3 or 4 infusion-related reactions - Inability to reduce corticosteroid dose to 10 mg or less of prednisone or equivalent per day within 12 weeks - Persistent Grade 2 or 3 adverse reactions that do not recover to Grade 0-1 within 12 weeks after the last dose of pembrolizumab - Any severe or Grade 3 treatment-related adverse reaction that recurs - Add 2.3 mL of Sterile Water for Injection, USP by injecting the water along the walls of the vial and not directly on the lyophilized powder (resulting concentration 25 mg/mL). - Slowly swirl the vial.
Allow up to 5 minutes for the bubbles to clear. Do not shake the vial. - Visually inspect the solution for particulate matter and discoloration prior to administration. The solution is clear to slightly opalescent, colorless to slightly yellow. Discard the vial if visible particles are observed. - Dilute pembrolizumab injection (solution) or reconstituted lyophilized powder prior to intravenous administration. - Withdraw the required volume from the vial(s) of pembrolizumab and transfer into an intravenous (IV) bag containing 0.9% Sodium Chloride Injection, USP or 5% Dextrose Injection, USP. Mix diluted solution by gentle inversion. The final concentration of the diluted solution should be between 1 mg/mL and 10 mg/mL. - Discard any unused portion left in the vial. - The product does not contain a preservative. - Store the reconstituted and diluted solution from the pembrolizumab 50 mg vial either: - At room temperature for no more than 6 hours from the time of reconstitution. This includes room temperature storage of reconstituted vials, storage of the infusion solution in the IV bag, and the duration of infusion. - Under refrigeration at 2°C to 8°C (36°F to 46°F) for no more than 24 hours from the time of reconstitution. If refrigerated, allow the diluted solution to come to room temperature prior to administration. - Store the diluted solution from the pembrolizumab 100 mg/4 mL vial either: - At room temperature for no more than 6 hours from the time of dilution. This includes room temperature storage of the infusion solution in the IV bag, and the duration of infusion. - Under refrigeration at 2°C to 8°C (36°F to 46°F) for no more than 24 hours from the time of dilution. If refrigerated, allow the diluted solution to come to room temperature prior to administration. - Do not freeze. - Administer the infusion solution intravenously over 30 minutes through an intravenous line containing a sterile, non-pyrogenic, low-protein-binding 0.2 micron to 5 micron in-line or add-on filter. - Do not co-administer other drugs through the same infusion line. ## Off-Label Use and Dosage (Adult) ### Guideline-Supported Use There is limited information regarding Off-Label Guideline-Supported Use of pembrolizumab in adult patients. ### Non–Guideline-Supported Use There is limited information regarding Off-Label Non–Guideline-Supported Use of pembrolizumab in adult patients. # Pediatric Indications and Dosage ## FDA-Labeled Indications and Dosage (Pediatric) There is limited information regarding FDA-Labeled Use of pembrolizumab in pediatric patients. ## Off-Label Use and Dosage (Pediatric) ### Guideline-Supported Use There is limited information regarding Off-Label Guideline-Supported Use of pembrolizumab in pediatric patients. ### Non–Guideline-Supported Use There is limited information regarding Off-Label Non–Guideline-Supported Use of pembrolizumab in pediatric patients. # Contraindications - None. # Warnings - Pneumonitis occurred in 12 (2.9%) of 411 melanoma patients, including Grade 2 or 3 cases in 8 (1.9%) and 1 (0.2%) patients, respectively, receiving pembrolizumab in Trial 1. The median time to development of pneumonitis was 5 months (range 0.3 weeks to 9.9 months). The median duration was 4.9 months (range 1 week to 14.4 months). Five of eight patients with Grade 2 and the one patient with Grade 3 pneumonitis required initial treatment with high-dose systemic corticosteroids (greater than or equal to 40 mg prednisone or equivalent per day) followed by a corticosteroid taper.
The median initial dose of high-dose corticosteroid treatment was 63.4 mg/day of prednisone or equivalent with a median duration of treatment of 3 days (range 1 to 34) followed by a corticosteroid taper. Pneumonitis led to discontinuation of pembrolizumab in 3 (0.7%) patients. Pneumonitis completely resolved in seven of the nine patients with Grade 2-3 pneumonitis. - Monitor patients for signs and symptoms of pneumonitis. Evaluate patients with suspected pneumonitis with radiographic imaging and administer corticosteroids for Grade 2 or greater pneumonitis. Withhold pembrolizumab for moderate (Grade 2) pneumonitis, and permanently discontinue pembrolizumab for severe (Grade 3) or life-threatening (Grade 4) pneumonitis. - Colitis (including microscopic colitis) occurred in 4 (1%) of 411 patients, including Grade 2 or 3 cases in 1 (0.2%) and 2 (0.5%) patients, respectively, receiving pembrolizumab in Trial 1. The median time to onset of colitis was 6.5 months (range 2.3 to 9.8). The median duration was 2.6 months (range 0.6 weeks to 3.6 months). All three patients with Grade 2 or 3 colitis were treated with high-dose corticosteroids (greater than or equal to 40 mg prednisone or equivalent per day) with a median initial dose of 70 mg/day of prednisone or equivalent; the median duration of initial treatment was 7 days (range 4 to 41), followed by a corticosteroid taper. One patient (0.2%) required permanent discontinuation of pembrolizumab due to colitis. All four patients with colitis experienced complete resolution of the event. - Monitor patients for signs and symptoms of colitis. Administer corticosteroids for Grade 2 or greater colitis. Withhold pembrolizumab for moderate (Grade 2) or severe (Grade 3) colitis, and permanently discontinue pembrolizumab for life-threatening (Grade 4) colitis. - Hepatitis (including autoimmune hepatitis) occurred in 2 (0.5%) of 411 patients, including a Grade 4 case in 1 (0.2%) patient, receiving pembrolizumab in Trial 1. The time to onset was 22 days for the case of Grade 4 hepatitis, which lasted 1.1 months. The patient with Grade 4 hepatitis permanently discontinued pembrolizumab and was treated with high-dose (greater than or equal to 40 mg prednisone or equivalent per day) systemic corticosteroids followed by a corticosteroid taper. Both patients with hepatitis experienced complete resolution of the event. - Monitor patients for changes in liver function. Administer corticosteroids for Grade 2 or greater hepatitis and, based on severity of liver enzyme elevations, withhold or discontinue pembrolizumab. - Hypophysitis occurred in 2 (0.5%) of 411 patients, consisting of one Grade 2 and one Grade 4 case (0.2% each), in patients receiving pembrolizumab in Trial 1. The time to onset was 1.7 months for the patient with Grade 4 hypophysitis and 1.3 months for the patient with Grade 2 hypophysitis. Both patients were treated with high-dose (greater than or equal to 40 mg prednisone or equivalent per day) corticosteroids followed by a corticosteroid taper and remained on a physiologic replacement dose. - Monitor for signs and symptoms of hypophysitis. Administer corticosteroids for Grade 2 or greater hypophysitis. Withhold pembrolizumab for moderate (Grade 2) hypophysitis, withhold or discontinue pembrolizumab for severe (Grade 3) hypophysitis, and permanently discontinue pembrolizumab for life-threatening (Grade 4) hypophysitis.
- Nephritis occurred in 3 (0.7%) patients, consisting of one case of Grade 2 autoimmune nephritis (0.2%) and two cases of interstitial nephritis with renal failure (0.5%), one Grade 3 and one Grade 4. The time to onset of autoimmune nephritis was 11.6 months after the first dose of pembrolizumab (5 months after the last dose) and lasted 3.2 months; this patient did not have a biopsy. Acute interstitial nephritis was confirmed by renal biopsy in two patients with Grades 3-4 renal failure. All three patients fully recovered renal function with treatment with high-dose corticosteroids (greater than or equal to 40 mg prednisone or equivalent per day) followed by a corticosteroid taper. - Monitor patients for changes in renal function. Administer corticosteroids for Grade 2 or greater nephritis. Withhold pembrolizumab for moderate (Grade 2) nephritis, and permanently discontinue pembrolizumab for severe (Grade 3) or life-threatening (Grade 4) nephritis. - Hyperthyroidism occurred in 5 (1.2%) of 411 patients, including Grade 2 or 3 cases in 2 (0.5%) and 1 (0.2%) patients, respectively, receiving pembrolizumab in Trial 1. The median time to onset was 1.5 months (range 0.5 to 2.1). The median duration was 2.8 months (range 0.9 to 6.1). One of two patients with Grade 2 and the one patient with Grade 3 hyperthyroidism required initial treatment with high-dose corticosteroids (greater than or equal to 40 mg prednisone or equivalent per day) followed by a corticosteroid taper. One patient (0.2%) required permanent discontinuation of pembrolizumab due to hyperthyroidism. All five patients with hyperthyroidism experienced complete resolution of the event. - Hypothyroidism occurred in 34 (8.3%) of 411 patients, including a Grade 3 case in 1 (0.2%) patient, receiving pembrolizumab in Trial 1. The median time to onset of hypothyroidism was 3.5 months (range 0.7 weeks to 19 months). All but two of the patients with hypothyroidism were treated with long-term thyroid hormone replacement therapy. The other two patients only required short-term thyroid hormone replacement therapy. No patient received corticosteroids or discontinued pembrolizumab for management of hypothyroidism. - Thyroid disorders can occur at any time during treatment. Monitor patients for changes in thyroid function (at the start of treatment, periodically during treatment, and as indicated based on clinical evaluation) and for clinical signs and symptoms of thyroid disorders. - Administer corticosteroids for Grade 3 or greater hyperthyroidism, withhold pembrolizumab for severe (Grade 3) hyperthyroidism, and permanently discontinue pembrolizumab for life-threatening (Grade 4) hyperthyroidism. Isolated hypothyroidism may be managed with replacement therapy without treatment interruption and without corticosteroids. - Other clinically important immune-mediated adverse reactions can occur. - The following clinically significant, immune-mediated adverse reactions occurred in less than 1% of patients treated with pembrolizumab in Trial 1: exfoliative dermatitis, uveitis, arthritis, myositis, pancreatitis, hemolytic anemia, partial seizures arising in a patient with inflammatory foci in brain parenchyma, and adrenal insufficiency. - Across clinical studies with pembrolizumab in approximately 2000 patients, the following additional clinically significant, immune-mediated adverse reactions were reported in less than 1% of patients: myasthenic syndrome, optic neuritis, and rhabdomyolysis.
- For suspected immune-mediated adverse reactions, ensure adequate evaluation to confirm etiology or exclude other causes. Based on the severity of the adverse reaction, withhold pembrolizumab and administer corticosteroids. Upon improvement to Grade 1 or less, initiate corticosteroid taper and continue to taper over at least 1 month. Restart pembrolizumab if the adverse reaction remains at Grade 1 or less. Permanently discontinue pembrolizumab for any severe or Grade 3 immune-mediated adverse reaction that recurs and for any life-threatening immune-mediated adverse reaction. - Based on its mechanism of action, pembrolizumab may cause fetal harm when administered to a pregnant woman. Animal models link the PD-1/PD-L1 signaling pathway with maintenance of pregnancy through induction of maternal immune tolerance to fetal tissue. If this drug is used during pregnancy, or if the patient becomes pregnant while taking this drug, apprise the patient of the potential hazard to a fetus. Advise females of reproductive potential to use highly effective contraception during treatment with pembrolizumab and for 4 months after the last dose of pembrolizumab. # Adverse Reactions ## Clinical Trials Experience - The following adverse reactions are discussed in greater detail in other sections of the labeling. - Immune-mediated pneumonitis - Immune-mediated colitis - Immune-mediated hepatitis - Immune-mediated hypophysitis - Renal failure and immune-mediated nephritis - Immune-mediated hyperthyroidism and hypothyroidism - Immune-mediated adverse reactions - Because clinical trials are conducted under widely varying conditions, adverse reaction rates observed in the clinical trials of a drug cannot be directly compared to rates in the clinical trials of another drug and may not reflect the rates observed in practice. - The data described in the WARNINGS section reflect exposure to pembrolizumab in Trial 1, an uncontrolled, open-label, multiple-cohort trial in which 411 patients with unresectable or metastatic melanoma received pembrolizumab at either 2 mg/kg every 3 weeks or 10 mg/kg every 2 or 3 weeks. The median duration of exposure to pembrolizumab was 6.2 months (range 1 day to 24.6 months) with a median of 10 doses (range 1 to 51). The study population characteristics were: median age of 61 years (range 18 to 94), 39% age 65 years or older, 60% male, 97% white, 73% with M1c disease, 8% with brain metastases, 35% with elevated LDH, 54% with prior exposure to ipilimumab, and 47% with two or more prior systemic therapies for advanced or metastatic disease. - Pembrolizumab was discontinued for adverse reactions in 9% of the 411 patients. Adverse reactions, reported in at least two patients, that led to discontinuation of pembrolizumab were: pneumonitis, renal failure, and pain. Serious adverse reactions occurred in 36% of patients receiving pembrolizumab. The most frequent serious adverse drug reactions reported in 2% or more of patients in Trial 1 were renal failure, dyspnea, pneumonia, and cellulitis. - Table 1 presents adverse reactions identified from analyses of the 89 patients with unresectable or metastatic melanoma who received pembrolizumab 2 mg/kg every three weeks in one cohort of Trial 1. Patients had documented disease progression following treatment with ipilimumab and, if BRAF V600 mutation positive, a BRAF inhibitor.
This cohort of Trial 1 excluded patients with severe immune-related toxicity related to ipilimumab, defined as any Grade 4 toxicity requiring treatment with corticosteroids or Grade 3 toxicity requiring corticosteroid treatment (greater than 10 mg/day prednisone or equivalent dose) for greater than 12 weeks; a medical condition that required systemic corticosteroids or other immunosuppressive medication; a history of pneumonitis or interstitial lung disease; or any active infection requiring therapy, including HIV or hepatitis B or C. Of the 89 patients in this cohort, the median age was 59 years (range 18 to 88), 33% were age 65 years or older, 53% were male, 98% were white, 44% had an elevated LDH, 84% had Stage M1c disease, 8% had brain metastases, and 70% received two or more prior therapies for advanced or metastatic disease. The median duration of exposure to pembrolizumab was 6.2 months (range 1 day to 15.3 months) with a median of nine doses (range 1 to 23). Fifty-one percent of patients were exposed to pembrolizumab for greater than 6 months and 21% for greater than 1 year. - Pembrolizumab was discontinued for adverse reactions in 6% of the 89 patients. The most common adverse reactions (reported in at least 20% of patients) were fatigue, cough, nausea, pruritus, rash, decreased appetite, constipation, arthralgia, and diarrhea. - As with all therapeutic proteins, there is the potential for immunogenicity. Because trough levels of pembrolizumab interfere with the electrochemiluminescent (ECL) assay results, a subset analysis was performed in the patients with a concentration of pembrolizumab below the drug tolerance level of the anti-product antibody assay. In this analysis, none of the 97 patients who were treated with 2 mg/kg every 3 weeks tested positive for treatment-emergent anti-pembrolizumab antibodies. - The detection of antibody formation is highly dependent on the sensitivity and specificity of the assay. Additionally, the observed incidence of antibody (including neutralizing antibody) positivity in an assay may be influenced by several factors including assay methodology, sample handling, timing of sample collection, concomitant medications, and underlying disease. For these reasons, comparison of incidence of antibodies to pembrolizumab with the incidences of antibodies to other products may be misleading. ## Postmarketing Experience There is limited information regarding Postmarketing Experience of pembrolizumab in the drug label. # Drug Interactions - No formal pharmacokinetic drug interaction studies have been conducted with pembrolizumab. # Use in Specific Populations ### Pregnancy Pregnancy Category (FDA): D - Based on its mechanism of action, pembrolizumab may cause fetal harm when administered to a pregnant woman. Animal models link the PD-1/PD-L1 signaling pathway with maintenance of pregnancy through induction of maternal immune tolerance to fetal tissue. If this drug is used during pregnancy, or if the patient becomes pregnant while taking this drug, apprise the patient of the potential hazard to a fetus. - Animal reproduction studies have not been conducted with pembrolizumab to evaluate its effect on reproduction and fetal development, but an assessment of the effects on reproduction was provided. A central function of the PD-1/PD-L1 pathway is to preserve pregnancy by maintaining maternal immune tolerance to the fetus.
Blockade of PD-L1 signaling has been shown in murine models of pregnancy to disrupt tolerance to the fetus and to result in an increase in fetal loss; therefore, potential risks of administering pembrolizumab during pregnancy include increased rates of abortion or stillbirth. As reported in the literature, there were no malformations related to the blockade of PD-1 signaling in the offspring of these animals; however, immune-mediated disorders occurred in PD-1 knockout mice. - Human IgG4 (immunoglobulins) are known to cross the placenta; therefore, pembrolizumab has the potential to be transmitted from the mother to the developing fetus. Based on its mechanism of action, fetal exposure to pembrolizumab may increase the risk of developing immune-mediated disorders or of altering the normal immune response. Pregnancy Category (AUS): There is no Australian Drug Evaluation Committee (ADEC) guidance on usage of pembrolizumab in women who are pregnant. ### Labor and Delivery There is no FDA guidance on use of pembrolizumab during labor and delivery. ### Nursing Mothers - It is not known whether pembrolizumab is excreted in human milk. No studies have been conducted to assess the impact of pembrolizumab on milk production or its presence in breast milk. Because many drugs are excreted in human milk, instruct women to discontinue nursing during treatment with pembrolizumab. ### Pediatric Use - Safety and effectiveness of pembrolizumab have not been established in pediatric patients. ### Geriatric Use - Of the 411 patients treated with pembrolizumab, 39% were 65 years and over. No overall differences in safety or efficacy were reported between elderly patients and younger patients. ### Gender There is no FDA guidance on the use of pembrolizumab with respect to specific gender populations. ### Race There is no FDA guidance on the use of pembrolizumab with respect to specific racial populations. ### Renal Impairment - Based on a population pharmacokinetic analysis, no dose adjustment is needed for patients with renal impairment. ### Hepatic Impairment - Based on a population pharmacokinetic analysis, no dose adjustment is needed for patients with mild hepatic impairment [total bilirubin (TB) less than or equal to ULN and AST greater than ULN, or TB greater than 1 to 1.5 times ULN and any AST]. Pembrolizumab has not been studied in patients with moderate (TB greater than 1.5 to 3 times ULN and any AST) or severe (TB greater than 3 times ULN and any AST) hepatic impairment. ### Females of Reproductive Potential and Males - Based on its mechanism of action, pembrolizumab may cause fetal harm when administered to a pregnant woman. Advise females of reproductive potential to use highly effective contraception during treatment with pembrolizumab and for at least 4 months following the last dose of pembrolizumab. ### Immunocompromised Patients There is no FDA guidance on the use of pembrolizumab in patients who are immunocompromised. # Administration and Monitoring ### Administration - Intravenous ### Monitoring There is limited information regarding Monitoring of pembrolizumab in the drug label. # IV Compatibility There is limited information regarding IV Compatibility of pembrolizumab in the drug label. # Overdosage - There is no information on overdosage with pembrolizumab. # Pharmacology ## Mechanism of Action - Binding of the PD-1 ligands, PD-L1 and PD-L2, to the PD-1 receptor found on T cells inhibits T cell proliferation and cytokine production. Upregulation of PD-1 ligands occurs in some tumors and signaling through this pathway can contribute to inhibition of active T-cell immune surveillance of tumors.
Pembrolizumab is a monoclonal antibody that binds to the PD-1 receptor and blocks its interaction with PD-L1 and PD-L2, releasing PD-1 pathway-mediated inhibition of the immune response, including the anti-tumor immune response. In syngeneic mouse tumor models, blocking PD-1 activity resulted in decreased tumor growth. ## Structure - Pembrolizumab is a humanized monoclonal antibody that blocks the interaction between PD-1 and its ligands, PD-L1 and PD-L2. Pembrolizumab is an IgG4 kappa immunoglobulin with an approximate molecular weight of 149 kDa. - Pembrolizumab for injection is a sterile, preservative-free, white to off-white lyophilized powder in single-use vials. Each vial is reconstituted and diluted for intravenous infusion. Each 2 mL of reconstituted solution contains 50 mg of pembrolizumab and is formulated in L-histidine (3.1 mg), polysorbate 80 (0.4 mg), and sucrose (140 mg). May contain hydrochloric acid/sodium hydroxide to adjust pH to 5.5. Pembrolizumab injection is a sterile, preservative-free, clear to slightly opalescent, colorless to slightly yellow solution that requires dilution for intravenous infusion. Each vial contains 100 mg of pembrolizumab in 4 mL of solution. Each 1 mL of solution contains 25 mg of pembrolizumab and is formulated in L-histidine (1.55 mg), polysorbate 80 (0.2 mg), sucrose (70 mg), and Water for Injection, USP. ## Pharmacodynamics There is limited information regarding Pharmacodynamics of pembrolizumab in the drug label. ## Pharmacokinetics There is limited information regarding Pembrolizumab Pharmacokinetics in the drug label. ## Nonclinical Toxicology There is limited information regarding Pembrolizumab Nonclinical Toxicology in the drug label. # Clinical Studies There is limited information regarding Pembrolizumab Clinical Studies in the drug label. # How Supplied There is limited information regarding Pembrolizumab How Supplied in the drug label. ## Storage There is limited information regarding Pembrolizumab Storage in the drug label. # Images ## Drug Images ## Package and Label Display Panel # Patient Counseling Information There is limited information regarding Pembrolizumab Patient Counseling Information in the drug label. # Precautions with Alcohol Alcohol-Pembrolizumab interaction has not been established. Talk to your doctor about the effects of taking alcohol with this medication. # Brand Names There is limited information regarding Pembrolizumab Brand Names in the drug label. # Look-Alike Drug Names There is limited information regarding Pembrolizumab Look-Alike Drug Names in the drug label. # Drug Shortage Status # Price
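The preparation instructions in this entry fix three numbers: a 2 mg/kg dose infused every 3 weeks, a 25 mg/mL reconstituted concentration, and a final diluted concentration between 1 mg/mL and 10 mg/mL. The sketch below only illustrates that arithmetic; the patient weight and bag volume are hypothetical example inputs, and nothing here is dosing or clinical guidance.

```python
# Illustration of the arithmetic stated in the label text above.
DOSE_MG_PER_KG = 2.0                  # recommended dose: 2 mg/kg every 3 weeks
RECONSTITUTED_MG_PER_ML = 25.0        # 50 mg vial + 2.3 mL diluent -> 25 mg/mL
FINAL_RANGE_MG_PER_ML = (1.0, 10.0)   # required final concentration in the IV bag

def infusion_plan(weight_kg: float, bag_volume_ml: float) -> dict:
    """Compute dose, volume to withdraw, and the resulting bag concentration."""
    dose_mg = DOSE_MG_PER_KG * weight_kg
    withdraw_ml = dose_mg / RECONSTITUTED_MG_PER_ML
    final_conc = dose_mg / (bag_volume_ml + withdraw_ml)
    lo, hi = FINAL_RANGE_MG_PER_ML
    return {
        "dose_mg": dose_mg,
        "withdraw_ml": round(withdraw_ml, 2),
        "final_mg_per_ml": round(final_conc, 2),
        "within_label_range": lo <= final_conc <= hi,
    }

# Hypothetical example: 70 kg patient, 50 mL bag of 0.9% sodium chloride.
print(infusion_plan(70, 50))
```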
https://www.wikidoc.org/index.php/Pembrolizumab
7631fb19ee517a19092abc05686f0588ac47b781
wikidoc
Pendant group
Pendant group A pendant group or side group is a group of atoms attached to the backbone chain of a long molecule. Usually, this molecule is a polymer. For example, the phenyl groups are the pendant groups on a polystyrene chain. Large, bulky pendant groups such as adamantyl usually raise the glass transition temperature (Tg) of a polymer by preventing the chains from sliding past each other easily. Short alkyl pendant groups may lower the Tg by a lubricant effect.
https://www.wikidoc.org/index.php/Pendant_group
Penicilliosis
Penicilliosis # Overview Penicilliosis is an infection caused by Penicillium marneffei. Once considered rare, its occurrence has increased due to AIDS. It is now the third most common opportunistic infection (after extrapulmonary tuberculosis and cryptococcosis) in HIV-positive individuals within the endemic area of Southeast Asia. # Epidemiology There is a high incidence of penicilliosis in AIDS patients in SE Asia; 10% of patients in Hong Kong get penicilliosis as an AIDS-related illness. Cases of P. marneffei human infections (penicilliosis) have also been reported in HIV-positive patients in Australia, Europe, Japan, the UK and the U.S. All the patients had visited Southeast Asia previously. Discovered in bamboo rats (Rhizomys) in Vietnam, it is associated with these rats and the tropical Southeast Asia area. Penicillium marneffei is endemic in Burma (Myanmar), Cambodia, Southern China, Indonesia, Laos, Malaysia, Thailand and Vietnam. Although both the immunocompetent and the immunocompromised can be infected, it is extremely rare to find systemic infections in HIV-negative patients. The incidence of P. marneffei is increasing as HIV spreads throughout Asia. An increase in global travel and migration means it will be of increased importance as an infection in AIDS sufferers. Penicillium marneffei has been found in bamboo rat faeces, liver, lungs and spleen. It has been suggested that these animals are a reservoir for the fungus. It is not clear whether the rats are affected by P. marneffei or are merely asymptomatic carriers of the disease. One study of 550 AIDS patients showed that the incidence was higher during the rainy season, which is when the rats breed but also when conditions are more favorable for production of fungal spores (conidia) that can become airborne and be inhaled by susceptible individuals. Another study could not establish contact with bamboo rats as a risk factor, but exposure to the soil was the critical risk factor. However, soil samples failed to yield much of the fungus. It is not known whether people get the disease by eating infected rats, or by inhaling fungi from their faeces. There is an example of an HIV-positive physician who was infected while attending a course on tropical microbiology. He did not handle the organism, though students in the same laboratory did. It is presumed he contracted the infection by inhaling aerosol containing P. marneffei conidia. This shows that airborne infections are possible. # Symptoms The most common symptoms are fever, skin lesions, anemia, generalized lymphadenopathy, and hepatomegaly. # Diagnosis Diagnosis is usually made by identification of the fungi from clinical specimens. Biopsies of skin lesions, lymph nodes, and bone marrow demonstrate the presence of organisms on histopathology. # Treatment Penicillium marneffei demonstrates in vitro susceptibility to multiple antifungal agents including ketoconazole, itraconazole, miconazole, flucytosine, and amphotericin B. Without treatment, patients have a poor prognosis. ## Antimicrobial Regimen - 1. Mild disease - Preferred regimen: Itraconazole 200 mg PO bid for 8 to 12 weeks without amphotericin B induction therapy - Alternative regimen: Voriconazole 400 mg PO bid on day 1 THEN 200 mg PO bid for 12 weeks - 2. Moderate-severe disease - Preferred regimen: Liposomal Amphotericin B 3-5 mg/kg/day IV qd OR Amphotericin B lipid complex 5 mg/kg/day IV qd for 2 weeks THEN Itraconazole 200 mg PO bid for 10 weeks - Alternative regimen: Voriconazole 6 mg/kg IV q12h on day 1 THEN 4 mg/kg q12h for at least 3 days THEN Voriconazole 200 mg PO bid for a total of 12 weeks - 3. Maintenance therapy - Preferred regimen: Itraconazole 200 mg PO qd - Alternative regimen: Voriconazole 200 mg PO bid - Note: Voriconazole and Itraconazole use requires serum levels to be monitored to ensure adequate absorption.
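The induction regimen above is weight-based, so the total daily amphotericin B dose scales with the patient's weight. The short sketch below simply restates that arithmetic; the 60 kg weight is a hypothetical example and the mg/kg figures are the ones quoted in the regimen above, so this is an illustration rather than a dosing tool.

```python
def daily_dose_range_mg(weight_kg: float, low_mg_per_kg: float,
                        high_mg_per_kg: float) -> tuple[float, float]:
    """Return the (low, high) total daily dose in mg for a weight-based range."""
    return weight_kg * low_mg_per_kg, weight_kg * high_mg_per_kg

# Hypothetical 60 kg patient on liposomal amphotericin B at 3-5 mg/kg/day:
low, high = daily_dose_range_mg(60, 3, 5)
print(f"{low:.0f}-{high:.0f} mg per day")  # 180-300 mg per day
```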
https://www.wikidoc.org/index.php/Penicilliosis
Penis removal
Penis removal In ancient civilizations, removal of the human penis was sometimes used as a means of demonstrating superiority: armies were sometimes known to sever the penises of their enemies to count the dead, as well as for trophies, although usually only the foreskins were taken. The practice of castration (removal of the testicles) sometimes also involves the removal of all or part of the penis, generally with a tube inserted to keep the urethra open for urination. Castration has been used to create a class of servants or slaves (and especially harem-keepers) called eunuchs (Greek Ευνούχοι) in many different places and eras. In the modern era, removal of the human penis is very rare (with some exceptions listed below), and references to removal of the penis are almost always symbolic. Castration is not so rare, and is performed as a last-ditch method of treatment of androgen sensitive prostate cancer. # The missing penis in Egyptian myth Osiris was killed by his brother Set, torn to pieces, with the penis disposed of in the Nile. Osiris's wife, Isis, with the assistance of Thoth, was able to return Osiris to life, but was unable to recover the penis, so she replaced it with an artificial penis made of gold. Through it, she conceived Horus. # Human penis removal in medicine and psychology Some men have penile amputations, known as penectomies, for medical reasons. Cancer, for example, sometimes necessitates removal of all or part of the penis. In some instances, botched childhood circumcisions have also resulted in full or partial penectomies. Genital surgical procedures for transwomen (transgendered or transsexual women) undergoing sex reassignment surgery, do not usually involve the complete removal of the penis; part or all of the glans is usually kept and reshaped as a clitoris, and the skin of the penile shaft may also be inverted to form the vagina. When procedures such as this are not possible, other procedures such as colovaginoplasty are used which do involve the removal of the penis. Issues related to the removal of the penis appear in psychology, for example in the condition known as castration anxiety. Others, who associate the organ with rape and male dominance and aggression, may consciously or subconsciously see the organ (their own or those of others) as a weapon and express a hatred for it, potentially desiring to see it violently removed. Some men have undergone penectomies as a voluntary body modification, but professional opinion is divided regarding the desire for penile amputation as a pathology, thus including it as part of a body dysmorphic disorder. Voluntary subincision, removal of the glans penis, and bifurcation of the penis are related topics. # Involuntary penis removal (assault) There have been incidents in which men have been assaulted, usually by their sexual partners, by having their penises severed. Lorena Bobbitt, for example, was popularly known for cutting off the penis of her husband, John Wayne Bobbitt, out of rage after he allegedly raped her, though he claimed it was for revenge when she discovered his infidelity. Bobbitt's penis was successfully reattached, and he later had a brief career in pornographic movies. This was not the first modern case, however. 
On 18 May 1936, Sada Abe (also known as Abe Sada) strangled her lover (believed to be at his request, he wanted to die while having sex) Kichizo Ishida (Ishida Kichizo) and cut off his penis, placed it in her kimono and carried it around with her for days before eventually turning it over to the police. She spent a very brief time in jail, and was granted amnesty in 1940. The penis was last seen at a department store exhibition in 1949. This episode was the basis of the film In the Realm of the Senses. Other forms of penis-related violence have also been recorded. In July 2000, in Harrisburg, Pennsylvania, a 17-year-old girl superglued her boyfriend's erect penis to his abdomen allegedly to punish him for infidelity. The boyfriend required emergency medical attention but not removal of his penis. # Symbolism and ramifications of involuntary penis removal Mutilation or forcible removal of the penis has special symbolic significance. As a symbol of male sexuality, fertility, masculinity, and, some feel, male aggression, the removal of the penis may be inspired by a desire to emasculate, and sometimes results in the emasculation of, the victim. Another motive, particularly in cases of spousal assault, is obviously sexual. # Penis Removal in Urban Legend There is a common urban legend pertaining to inadvertent removal of the penis in connection with the use of psychedelic drugs. The story begins with a teenage boy who has never tried drugs before. He hears from his friends that certain drugs heighten sexual excitement. While masturbating under the influence of the drug, he becomes hungry while hallucinating. He sees his erect penis, but perceives it as a hot dog or sausage. He begins eating his penis. The story usually ends with the boy either dying or being found by a family member and taken to a hospital. The story may have been meant to scare children from using drugs. See Andreas W below, the nearest documented case to this urban legend. # Documented cases The following are documented cases of men having their penises severed due to accident, spousal jealousy or self infliction (intentional or not): - The penis of Napoleon was reportedly severed at his autopsy, and purloined: it was some years later sold to a urologist for $40,000. - Dr. W.C. Minor, a contributor to the Oxford English Dictionary who suffered from schizophrenia, performed an autopenectomy in 1902 to pay for imaginary sins. - Grigori Rasputin's penis was severed in the assassination that ended his life on December 16, 1916 (O.S.): it was reported rescued, kept in a wooden box and much cherished by his daughter, Maria. It has reportedly been on display in various locations. - The first documented case of a completely successful penis replantation, restoring full function, was performed at Massachusetts General Hospital by a team led by Dr. Hugh H. Young II, with fellow urologist Dr. John F.S. Daly and plastic surgeons Dr. Benjamin E. Cohen and Dr. James W. May. The case is documented in the February 1977 issue of the American Society of Plastic Surgeons journal, Plastic and Reconstructive Surgery. - In 1966, six-month old David Reimer's penis was destroyed during a botched circumcision using an electrocautery device. He was re-assigned as a girl with tragic consequences. As a teenager, he underwent genital reconstructive surgery to restore his male organ. Years later, David committed suicide. - In 1993, Lorena Bobbitt cut off the penis of her husband, John Wayne Bobbitt with a kitchen knife. 
It was surgically re-attached, and he subsequently became a porn star. She was found not guilty by reason of temporary insanity and was ordered to undergo 45 days of psychiatric evaluation in a hospital. - In March 1996, Ms. Tran Nhu Tran, a Vietnamese immigrant in Australia, attempted to sever her husband's penis with a pair of scissors. She was charged with malicious wounding, but the charges were dropped based on reconciliation with her husband. - On July 1, 1997, Ms. Kim Phuong Tran (Kim Tran #1), a Vietnamese immigrant in British Columbia, severed her husband's penis after he had told her he was in love with another woman. He had not been discreet about his mistress. Kim Phuong Tran kept telling her husband to give up the mistress and pleading with him not to leave her. He told Kim Phuong Tran that he needed to be left alone so he could think. He then went to sleep. While he was asleep she cut off his penis, and immediately flushed it down the toilet. His penis could not be recovered. Ms. Tran was sentenced to a two-year conditional stay-at-home sentence with community service. Many men's rights groups in Canada were outraged at the lightness of her sentence. - Earl Zea was prosecuted for filing a police report in 1997 that his penis was removed in an assault while asleep, only later admitting that it had been a self-inflicted move to deter a gay male stalker named Ronnie Fountain. - On December 11, 1997, California resident Alan Hall was admitted to NorthBay Medical Center after having his penis severed. Hall claimed his penis was severed by an attacker named 'Brenda,' in a revenge attack because Mr. Hall had killed Denise Denofrio in July 1983. Later, Hall admitted he had removed his own penis while intoxicated, expecting that it would easily be reattached by surgeons. - In March 2001, in the town of Rotenburg, central Germany, cannibal Armin Meiwes cut off and flambéed a man's penis, with his consent, and the two men ate it together. The other man, Bernd Jürgen Brandes, was then killed by Meiwes, also with his consent. The song "Mein Teil" by Rammstein was inspired by the case. - In January 2002, in Russia, Pavel Morozov, a player for Spartak, a football team for disabled people, was brutally murdered by his friend's girlfriend because he did not want to have sex with her. The friend invited Pavel over to his house to drink vodka with him and his girlfriend. The girlfriend became more interested in Pavel the more drunk she got. She made advances towards Pavel but as he did not reciprocate them she became upset and started hitting him and screaming. Pavel's friend came over to see what was going on. He might have started hitting Pavel as well. Pavel ended up unconscious on the floor. The girlfriend then unzipped Pavel's pants and cut off his penis. She then stabbed him in the chest. Pavel's body was then thrown out in the street. The other two continued drinking. They were arrested the next morning. - In 2003, Alfonse Mumbo, 38, a Kenyan villager, cut off his penis and testicles in order to punish his wife for adultery. - In 2003, a German student from Halle, known just as "Andreas W", cut off his own penis and tongue with a pair of garden shears while under the influence of the deliriant drug datura. Neither organ was re-attached successfully. - In 2004, in Kassel, Germany, a 50-year-old woman severed the penis of her Ghanaian ex-husband but died as a result of wounds inflicted by the same knife. The man's organ was later retrieved from the same room in which she died, though it is unknown whether it was re-attached.
Fortean Times later reported that the court was told that the man had severed his own penis before attacking his ex-wife. - In October 2004, Dr. Naum Ciomu chopped patient Nelu Radonescu's penis into small pieces in a fit of anger during routine surgery for a testicular malformation. He was ultimately found guilty of grievous bodily harm, fined, and received a one-year suspended jail sentence for the attack. The victim ultimately had reconstructive surgery using tissue from his arm. - In November 2004, Manit Srithammathan cut off two teenage boys' penises and threw them in a canal. When the police questioned Srithammathan, he said he had cut off and disposed of their penises because the boys refused to confess to stealing $1,250 from his ATM account after they were shown videotape evidence of their theft. - In February 2005, Ms. Kim Tran (Kim Tran #2), a Vietnamese immigrant in Alaska, severed her boyfriend's penis with a kitchen knife, after tying him to the windowsill. The severed organ was flushed down the toilet but retrieved and successfully reattached. Ms. Tran was convicted on charges of serious assault with a weapon, but charges of tampering with evidence and sexual assault were dropped. - In February 2005, Spanish surgeons reconstructed the penises of two Kenyan boys whose organs were cut off by witch doctors making a potion supposed to cure HIV/AIDS. - On July 23, 2005, Delmy Ruiz, 49, was found guilty of aggravated assault after she had severed Rene Aramando Nuñez' penis with a knife. Ruiz said he had abused her earlier, but it was believed that she was really just jealous because he was seeing someone else. She lured him over to the house to talk about documents concerning the house that they owned together. He fell asleep while at the house. That is when she cut off his penis. The jury had been shown graphic photos of Nuñez' wounded crotch where more than 80% of his penis was completely removed save for a small stump. The penis was never recovered as it had been removed from the scene by her dog. Ruiz was sentenced to eight years in prison and fined $10,000. - On 20 September 2005, the first successful penis transplant was performed in a military hospital in Guangzhou, China. A 44-year-old man had sustained an injury in an accident that severed his penis. Despite atrophy of blood vessels and nerves after a protracted period of time had elapsed (exact length not given), the arteries, veins, nerves and the corpora spongiosa were successfully matched. After seven hours' surgery, the penis regained its function and even managed to attain erection. The extent to which the penis's function was restored, and the occurrence of rejection or infection, remain to be seen. - On March 15, 2006, Polish-American immigrant Jakub Fik, distraught over problems with a girlfriend, went on a vandalism spree; when confronted by Chicago police, he severed his own penis and threw it at the officers. He was taken into custody and sent into surgery. - In Bahrain, an Indian housemaid attacked her husband and severed his penis because of his alleged infidelity. She then threw the penis out of their apartment window and into the street. - In the 1990s, a man featured on the Jerry Springer talk show had desired to become a woman and so severed his own penis and hid it from his wife. - On April 22, 2007, a man cut off his penis with a knife in a packed London restaurant. - In 2007, Li Gengbao's penis was cut off by his wife when she found out he was cheating on her.
After hearing Li Gengbao plead for his penis back, the wife threw it out the window where the neighbor's dog ate it.
https://www.wikidoc.org/index.php/Penis_removal
Pentachromacy
Pentachromacy Pentachromacy is the condition of possessing five independent channels for conveying color information. Organisms with pentachromacy are called pentachromats. For these organisms, the perceptual effect of any arbitrarily chosen light from its visible spectrum can be matched by a mixture of no more than five different pure spectral lights. The normal explanation of pentachromacy is that the organism's retina contains five types of cone cells with different absorption spectra. In practice the number of such receptor types may be greater than five, since different types may be active at different light intensities. Some birds (notably pigeons) and butterflies have five or more kinds of color receptors in their retinae, and are therefore believed to be pentachromats, though psychophysical evidence of functional pentachromacy is not easy to come by. As with tetrachromacy, it is suggested that women carriers of genes for both mild forms of color blindness, deuteranomaly and protanomaly, are born with five different types of color-sensing cones though the red- and green-deficient cones are later lost.
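The statement that any visible light can be matched by at most five pure spectral lights follows from treating each receptor type as one independent channel: a stimulus is reduced to five receptor responses, and a match only needs to reproduce those five numbers. The sketch below illustrates the linear algebra with entirely made-up Gaussian sensitivity curves (it does not use measured pigeon or butterfly pigment data), so it is a conceptual toy rather than a model of any real pentachromat.

```python
import numpy as np

wavelengths = np.linspace(350, 700, 351)           # nm, 1 nm steps
peaks = [370, 445, 508, 565, 620]                  # hypothetical peak sensitivities

# Made-up Gaussian sensitivity curves, one row per receptor type.
sensitivities = np.array(
    [np.exp(-((wavelengths - p) ** 2) / (2 * 30.0 ** 2)) for p in peaks]
)

def receptor_responses(spectrum: np.ndarray) -> np.ndarray:
    """Five numbers summarizing a light for this toy pentachromat."""
    return sensitivities @ spectrum

# A broadband test light and five narrow-band "primaries" at the peak wavelengths.
test_light = np.exp(-((wavelengths - 520) ** 2) / (2 * 80.0 ** 2))
primaries = np.array([np.isclose(wavelengths, p).astype(float) for p in peaks])

# Solve a 5x5 linear system for primary intensities giving the same responses.
A = sensitivities @ primaries.T
weights = np.linalg.solve(A, receptor_responses(test_light))
match = primaries.T @ weights
assert np.allclose(receptor_responses(match), receptor_responses(test_light))
# Negative weights correspond to adding that primary to the test side instead,
# as in real color-matching experiments.
```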
https://www.wikidoc.org/index.php/Pentachromacy
Peptidoglycan
Peptidoglycan Peptidoglycan, also known as murein, is a polymer consisting of sugars and amino acids that forms a mesh-like layer outside the plasma membrane of eubacteria. The sugar component consists of alternating β-(1,4)-linked N-acetylglucosamine and N-acetylmuramic acid residues. Attached to the N-acetylmuramic acid is a peptide chain of three to five amino acids. The peptide chain can be cross-linked to the peptide chain of another strand, forming the 3D mesh-like layer. Some Archaea have a similar layer of pseudopeptidoglycan. Peptidoglycan serves a structural role in the bacterial cell wall, giving structural strength, as well as counteracting the osmotic pressure of the cytoplasm. A common misconception is that peptidoglycan gives the cell its shape; however, whereas peptidoglycan helps maintain the structure of the cell, it is actually the MreB protein that facilitates cell shape. Peptidoglycan is also involved in binary fission during bacterial cell reproduction. The peptidoglycan layer is substantially thicker in Gram-positive bacteria (20 to 80 nm) than in Gram-negative bacteria (7 to 8 nm), with the attachment of the S-layer. Peptidoglycan forms around 90% of the dry weight of Gram-positive bacteria but only 10% of Gram-negative strains. In Gram-positive strains, it is important in attachment roles and for serotyping purposes. # Antibiotic inhibition Some antibacterial drugs such as penicillin interfere with the production of peptidoglycan by binding to bacterial enzymes known as penicillin-binding proteins or transpeptidases. Penicillin-binding proteins form the bonds between oligopeptide crosslinks in peptidoglycan. For a bacterial cell to reproduce through binary fission, more than a million peptidoglycan subunits (NAM-NAG+oligopeptide) must be attached to existing subunits. Mutations in transpeptidases that lead to reduced interactions with an antibiotic are a significant source of emerging antibiotic resistance. Considered the human body's own antibiotic, lysozymes found in tears work by breaking the β-(1,4)-glycosidic bonds in peptidoglycan (see below) and thereby destroying many bacterial cells. Antibiotics such as penicillin commonly target bacterial cell wall formation (of which peptidoglycan is an important component) because animal cells do not have cell walls. # Structure The peptidoglycan layer in the bacterial cell wall is a crystal lattice structure formed from linear chains of two alternating amino sugars, namely N-acetylglucosamine (GlcNAc or NAG) and N-acetylmuramic acid (MurNAc or NAM). The alternating sugars are connected by a β-(1,4)-glycosidic bond. Each MurNAc is attached to a short (4- to 5-residue) amino acid chain, normally containing D-alanine, D-glutamic acid, and mesodiaminopimelic acid. These three amino acids do not occur in proteins and are thought to help protect against attacks by most peptidases. Cross-linking between amino acids in different linear amino sugar chains by an enzyme known as transpeptidase results in a 3-dimensional structure that is strong and rigid. The specific amino acid sequence and molecular structure vary with the bacterial species.
https://www.wikidoc.org/index.php/Peptidoglycan
Pericoronitis
Pericoronitis # Overview Pericoronitis is a common problem in young adults with partial tooth impactions. It occurs when the tissue around the wisdom tooth becomes infected because bacteria have invaded the area. Food impaction and caries (tooth cavities) are also problems associated with third molar pain. Treatment for minor symptoms of pericoronitis (spontaneous pain, localized swelling, purulence/drainage, foul taste) is irrigation. Major symptoms of pericoronitis (difficulty swallowing, enlarged lymph nodes, fever, limited mouth opening, facial cellulitis/infection) are usually treated with antibiotics. In most instances the symptoms will recur, and the only definitive treatment is extraction. If left untreated, however, recurring infections are likely, and the infection can eventually spread to other areas of the mouth. The most severe cases are treated in a hospital and may require intravenous antibiotics and surgery. The removal of the wisdom tooth (extraction) should occur at a time when acute infection is not present, as extracting the tooth during an acute, painful infection can cause the infection to spread to dangerous areas around the throat. Therefore, a dentist will usually clean the area, prescribe antibiotics if indicated, and wait for the infection to settle before scheduling the extraction.
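The treatment description above is effectively a simple triage: irrigation when only minor symptoms are present, antibiotics when any major symptom appears, and extraction as the only definitive treatment once the acute infection has settled. The sketch below restates that logic in code purely as an illustration; the symptom lists are copied from the text above, and the function is not a clinical decision tool.

```python
MINOR = {"spontaneous pain", "localized swelling", "purulence/drainage", "foul taste"}
MAJOR = {"difficulty swallowing", "enlarged lymph nodes", "fever",
         "limited mouth opening", "facial cellulitis/infection"}

def initial_management(symptoms: set) -> str:
    """Map reported symptoms to the initial step described in the text above."""
    if symptoms & MAJOR:
        return "antibiotics, then extraction once the acute infection has settled"
    if symptoms & MINOR:
        return "irrigation; extraction remains the definitive treatment"
    return "routine review"

print(initial_management({"fever", "foul taste"}))
# antibiotics, then extraction once the acute infection has settled
```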
https://www.wikidoc.org/index.php/Pericoronitis
Perihepatitis
Perihepatitis # Overview Perihepatitis is inflammation of the serous or peritoneal coating of the liver. Perihepatitis is often caused by one of the inflammatory disorders of the female upper genital tract, known collectively as pelvic inflammatory disease. Some patients have sharp pain in the right upper quadrant of the abdomen. Perihepatitis occurring as a complication of pelvic inflammatory disease is known as Fitz-Hugh-Curtis syndrome. Common bacterial causes of this disease are Chlamydia trachomatis and Neisseria gonorrhoeae.
https://www.wikidoc.org/index.php/Perihepatitis
Perineal tear
Perineal tear # Overview In obstetrics, a perineal tear is a spontaneous (unintended) laceration of the skin and other soft tissue structures which, in women, separate the vagina from the anus. Perineal tears mainly occur in women as a result of vaginal childbirth, which strains the perineum. Tears vary widely in severity. The majority are superficial and require no treatment, but severe tears can cause significant bleeding, long-term pain or dysfunction. A perineal tear is distinct from an episiotomy, in which the perineum is intentionally lacerated to facilitate delivery. # Anatomy In a woman, the anus and the vaginal opening lie within the anatomical region known as the perineum. Each opening is surrounded by a wall, and the anal wall is separated from the vaginal wall by a mass of soft tissue including: - The muscles of the anus (corrugator cutis ani, the internal anal sphincter and the external anal sphincter) - The medial muscles of the urogenital region (the superficial transverse perineal muscle, the deep transverse perineal muscle and bulbocavernosus) - The medial levator ani muscles (puborectalis and pubococcygeus) - The fascia of perineum, which covers these muscles - The overlying skin and subcutaneous tissue A perineal tear may involve some or all of these structures, which normally aid in supporting the pelvic organs and maintaining faecal continence. # Classification Tears are classified into four categories: - First-degree tear: laceration is limited to the fourchette and superficial perineal skin or vaginal mucosa - Second-degree tear: laceration extends beyond fourchette, perineal skin and vaginal mucosa to perineal muscles and fascia, but not the anal sphincter - Third-degree tear: fourchette, perineal skin, vaginal mucosa, muscles, and anal sphincter are torn; third-degree tears may be further subdivided into three subcategories: 3a: partial tear of the external anal sphincter involving less than 50% thickness 3b: greater than 50% tear of the external anal sphincter 3c: internal sphincter is torn - 3a: partial tear of the external anal sphincter involving less than 50% thickness - 3b: greater than 50% tear of the external anal sphincter - 3c: internal sphincter is torn - Fourth-degree tear: fourchette, perineal skin, vaginal mucosa, muscles, anal sphincter, and rectal mucosa are torn # Cause In humans and some other primates, the head of the term fetus is so large in comparison to the size of the birth canal that term delivery is rarely possible without some degree of trauma. As the head passes through the pelvis, the soft tissues are stretched and compressed. The risk of severe tear is greatly increased if the fetal head is oriented occiput posterior (face forward), if the mother has not given birth before or if the fetus is large. # Prevention The risk of perineal tear is reduced by the use of medio-lateral episiotomy, although this procedure is also traumatic. Epidural anaesthesia and induction of labour also reduce the risk. Instrumentation (the use of forceps or ventouse) reduces the risk if the fetus is in the occiput anterior (normal) position. Several other techniques are used to reduce the risk of tearing, but with little evidence for efficacy. Antenatal digital perineal massage is often advocated, and may reduce the risk of trauma only in nulliparous women. ‘Hands on’ techniques employed by midwives, in which the foetal head is guided through the vagina at a controlled rate have been widely advocated, but their efficacy is unclear. 
Waterbirth and labouring in water are popular for several reasons, and it has been suggested that by softening the perineum they might reduce the rate of tearing. However, this effect has never been clearly demonstrated. The ‘Epi-no birth trainer’, a relatively recent invention, is a device specifically designed to strengthen and stretch the perineum during pregnancy. In spite of some promising studies, systematic review has shown no effect on the rate of tearing or episiotomy. # Prevalence Over 85% of women having a vaginal birth sustain some form of perineal trauma, and 60-70% receive stitches. A retrospective study of 8603 vaginal deliveries found a third degree tear had been clinically diagnosed in only 50 women (0.6%). However, when the same authors used anal endosonography in a consecutive group of 202 deliveries, there was evidence of third degree tears in 35% of first-time mothers and 44% of mothers with previous children. These numbers are confirmed by other researchers. # Complications First and second degree tears rarely cause long-term problems. Among women who experience a third or fourth degree tear, 60-80% are asymptomatic after 12 months. Faecal incontinence, faecal urgency, chronic perineal pain and dyspareunia occur in a minority of patients, but may be permanent. The symptoms associated with perineal tear are not always due to the tear itself, since there are often other injuries, such as avulsion of pelvic floor muscles, that are not evident on examination. # Insurance Coverage A study by the Agency for Healthcare Research and Quality (AHRQ) found that in 2011, first- and second-degree perineal tear was the most common complicating condition for vaginal deliveries in the U.S. among women covered by either private insurance or Medicaid. Second-degree perineal laceration rates were higher for women covered by private insurance than for women covered by Medicaid.
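The four-degree grading scheme in the Classification section above is essentially a lookup on which structures are torn and, for third-degree tears, on how much of the external anal sphincter is involved. The sketch below restates that scheme as code purely for illustration; the parameter names are simplified and the function is not intended for clinical documentation.

```python
def tear_degree(perineal_muscles_torn: bool, external_sphincter_pct: float,
                internal_sphincter_torn: bool, rectal_mucosa_torn: bool) -> str:
    """Restate the four-degree perineal tear classification described above."""
    if rectal_mucosa_torn:
        return "fourth degree"
    if internal_sphincter_torn:
        return "third degree (3c)"
    if external_sphincter_pct > 0:
        return "third degree (3b)" if external_sphincter_pct > 50 else "third degree (3a)"
    if perineal_muscles_torn:
        return "second degree"
    return "first degree"

# Example: perineal muscles torn plus a 30% external sphincter tear -> 3a
print(tear_degree(True, 30, False, False))  # third degree (3a)
```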
https://www.wikidoc.org/index.php/Perineal_tear
Period (gene)
Period (gene) Period (per) is a gene located on the X chromosome of Drosophila melanogaster. Oscillations in levels of both per transcript and its corresponding protein PER have a period of approximately 24 hours and together play a central role in the molecular mechanism of the Drosophila biological clock driving circadian rhythms in eclosion and locomotor activity. Mutations in the per gene can shorten (perS), lengthen (perL), and even abolish (per0) the period of the circadian rhythm. # Discovery The period gene and three mutants (perS, perL, and per0) were isolated in an EMS mutagenesis screen by Ronald Konopka and Seymour Benzer in 1971. The perS, perL, and per0 mutations were found to not complement each other, so it was concluded that the three phenotypes were due to mutations in the same gene. The discovery of mutants that altered the period of circadian rhythms in eclosion and locomotor activity (perS and perL) indicated the role of the per gene in the clock itself and not an output pathway. The period gene was first sequenced in 1984 by Michael Rosbash and colleagues. In 1998, it was discovered that per produces two transcripts (differing only by the alternative splicing of a single untranslated intron) which both encode the PER protein. # Function ## Circadian clock In Drosophila, per mRNA levels oscillate with a period of approximately 24 hours, peaking during the early subjective night. The per product PER also oscillates with a nearly 24-hour period, peaking about six hours after per mRNA levels during the middle subjective night. When PER levels increase, the inhibition of per transcription increases, lowering the protein levels. However, because PER protein cannot directly bind to DNA, it does not directly influence its own transcription; alternatively, it inhibits its own activators. After PER is produced from per mRNA, it dimerizes with Timeless (TIM) and the complex goes into the nucleus and inhibits the transcription factors of per and tim, the CLOCK/CYCLE heterodimer. This CLOCK/CYCLE complex acts as a transcriptional activator for per and tim by binding to specific enhancers (called E-boxes) of their promoters. Therefore, inhibition of CLK/CYC lowers per and tim mRNA levels, which in turn lower the levels of PER and TIM. Now, cryptochrome (CRY) is a light sensitive protein which inhibits TIM in the presence of light. When TIM is not complexed with PER, another protein, doubletime, or DBT, phosphorylates PER, targeting it for degradation. In mammals, an analogous transcription-translation negative feedback loop is observed. Translated from the three mammalian homologs of drosophila-per, one of three PER proteins (PER1, PER2, and PER3) dimerizes via its PAS domain with one of two cryptochrome proteins (CRY1 and CRY2) to form a negative element of the clock. This PER/CRY complex moves into the nucleus upon phosphorylation by CK1-epsilon (casein kinase 1 epsilon) and inhibits the CLK/BMAL1 heterodimer, the transcription factor that is bound to the E-boxes of the three per and two cry promoters by basic helix-loop-helix (BHLH) DNA-binding domains. The mammalian period 1 and period 2 genes play key roles in photoentrainment of the circadian clock to light pulses. This was first seen in 1999 when Akiyama et al. showed that mPer1 is necessary for phase shifts induced by light or glutamate release. Two years later, Albrecht et al. 
found genetic evidence to support this result when they discovered that mPer1 mutants are not able to advance the clock in response to a late-night light pulse (ZT22) and that mPer2 mutants are not able to delay the clock in response to an early-night light pulse (ZT14). Thus, mPer1 and mPer2 are necessary for the daily resetting of the circadian clock to normal environmental light cues. per has also been implicated in the regulation of several output processes of the biological clock, including mating activity and oxidative stress response, through per mutation and knockout experiments. Drosophila melanogaster has naturally occurring variation in Thr-Gly repeats, occurring along a latitudinal cline. Flies with 17 Thr-Gly repeats are found more commonly in Southern Europe and flies with 20 Thr-Gly repeats are found more commonly in Northern Europe. ## Non-circadian In addition to its circadian functions, per has also been implicated in a variety of other non-circadian processes. The mammalian period 2 gene plays a key role in tumor growth in mice; mice with an mPer2 knockout show a significant increase in tumor development and a significant decrease in apoptosis. This is thought to be caused by mPer2 circadian deregulation of common tumor suppression and cell cycle regulation genes, such as Cyclin D1, Cyclin A, Mdm-2, and Gadd45α, as well as the transcription factor c-myc, which is directly controlled by circadian regulators through E box-mediated reactions. In addition, mPer2 knockout mice show increased sensitivity to gamma radiation and tumor development, further implicating mPer2 in cancer development through its regulation of DNA damage-responsive pathways. Thus, circadian control of clock-controlled genes that function in cell growth control and DNA damage response may affect the development of cancer in vivo. per has been shown to be necessary and sufficient for long-term memory (LTM) formation in Drosophila melanogaster. per mutants show deficiencies in LTM formation that can be rescued with the insertion of a per transgene and enhanced with overexpression of the per gene. This response is absent in mutations of other clock genes (timeless, dClock, and cycle). Research suggests that synaptic transmission through per-expressing cells is necessary for LTM retrieval. per has also been shown to extend the lifespan of the fruit fly, suggesting a role in aging. This result, however, is still controversial, as the experiments have not been successfully repeated by another research group. In mice it has been shown that there is a link between per2 and preferred alcohol intake. Alcohol consumption has also been linked to shortening of the free-running period. The effects of alcoholism on the per1 and per2 genes have also been linked to the depression associated with alcohol, as well as to an individual's disposition to relapse into alcoholism. # Mammalian homologs of per In mammals, there are three known PER family genes: PER1, PER2, and PER3. The mammalian molecular clock has homologs to the proteins found in Drosophila. A homolog of CLOCK plays the same role in the human clock, and CYC is replaced by BMAL1. CRY has two human homologs, CRY1 and CRY2. A computational model has been developed by Jean-Christophe Leloup and Albert Goldbeter to simulate the feedback loop created by the interactions between these proteins and genes, including the per gene and PER protein.
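To make the structure of this transcription-translation feedback loop concrete, the sketch below numerically integrates a minimal Goodwin-type oscillator in which per mRNA drives cytoplasmic PER, cytoplasmic PER feeds a nuclear pool, and nuclear PER represses transcription. This is only an illustration of the general modelling approach mentioned above, not the published Leloup-Goldbeter model: the variable names, rate constants, and initial conditions are illustrative assumptions.

```python
# Minimal Goodwin-type negative-feedback sketch of the per loop.
# All parameter values are illustrative assumptions, not a published fit.
import numpy as np
from scipy.integrate import solve_ivp

# time in hours, concentrations in arbitrary units
v_s, v_m, K_I, n = 0.7, 0.35, 1.0, 4      # max transcription, mRNA decay, repression threshold, Hill coefficient
K_m              = 1.0                    # Michaelis constant for mRNA decay
k_s, v_d, K_d    = 0.7, 0.35, 1.0         # translation rate, cytoplasmic PER decay
k_in, v_n, K_n   = 0.7, 0.35, 1.0         # nuclear accumulation, nuclear PER decay

def per_loop(t, y):
    m, p_c, p_n = y                                                # per mRNA, cytoplasmic PER, nuclear PER
    dm   = v_s * K_I**n / (K_I**n + p_n**n) - v_m * m / (K_m + m)  # transcription repressed by nuclear PER
    dp_c = k_s * m - v_d * p_c / (K_d + p_c)                       # translation and saturable degradation
    dp_n = k_in * p_c - v_n * p_n / (K_n + p_n)                    # gain from the cytoplasmic pool, nuclear clearance
    return [dm, dp_c, dp_n]

sol = solve_ivp(per_loop, (0.0, 120.0), [0.1, 0.1, 0.1], max_step=0.1, dense_output=True)
t = np.linspace(60.0, 120.0, 600)          # inspect the run after initial transients
m, p_c, p_n = sol.sol(t)
print("per mRNA maximum near t = %.1f h" % t[np.argmax(m)])
print("nuclear PER maximum near t = %.1f h" % t[np.argmax(p_n)])
```

Depending on the constants chosen, a loop of this form either settles to a steady state or produces sustained oscillations in which the protein peaks lag the mRNA peaks, which is the qualitative phase relationship described for PER and per mRNA above.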
The human homologs show sequence and amino acid similarity to Drosophila Per and also contain the PAS domain and nuclear localization sequences that Drosophila Per has. The human proteins are expressed rhythmically in the suprachiasmatic nucleus as well as in areas outside the SCN. Additionally, while Drosophila PER moves between the cytoplasm and the nucleus, mammalian PER is more compartmentalized: mPer1 primarily localizes to the nucleus and mPer2 to the cytoplasm. ## Clinical significance Familial advanced sleep-phase syndrome (FASPS) is known to be associated with mutations in the mammalian Per2 gene. People suffering from the disorder have a shorter period and an advanced phase: they go to sleep in the early evening (around 7pm) and wake up before sunrise (around 4am). In 2006, a lab in Germany identified particular phosphorylated residues of PER2 that are mutated in people suffering from FASPS. Chronotherapy is sometimes used as a treatment, as an attempt to alter the phase of the individual's clock using cycles of bright light.
https://www.wikidoc.org/index.php/Period_(gene)
Periodic acid
Periodic acid Periodic acid is HIO4 or H5IO6. In dilute solution, periodic acid exists as H+ and IO4−. When more concentrated, orthoperiodic acid, H5IO6, is formed. This can be obtained as a crystalline solid. Orthoperiodic acid can be dehydrated to metaperiodic acid, HIO4. Further heating gives diiodine pentoxide (I2O5) and oxygen; apparently the anhydride 'diiodine heptoxide' does not exist in nature but can be formed synthetically. Thus, two forms of periodates exist: one relating to the acid HIO4, the other relating to H5IO6. The former gives rise to metaperiodates (meta- meaning less water) and the latter to orthoperiodates (ortho- meaning more water). Metaperiodates have solubilities and chemical properties similar to perchlorates (a similar but larger ion size), though they are less oxidizing than perchlorates. Periodic acid is also used in organic chemistry for structural analysis: it will cleave a vicinal diol into two aldehyde fragments, which can be useful in determining the structure of carbohydrates. # Notes and references - ↑ The name is not derived from "period", but from "iodine": per-iodic acid (compare iodic acid, perchloric acid), and it should thus be pronounced per-iodic and not as in the usual meaning of periodic.
https://www.wikidoc.org/index.php/Periodic_acid
Perpendicular
Perpendicular In geometry, two lines or planes (or a line and a plane) are considered perpendicular (or orthogonal) to each other if they form congruent adjacent angles. The term may be used as a noun or adjective. Thus, referring to Figure 1, the line AB is the perpendicular to CD through the point B. Note that by definition, a line is infinitely long, and strictly speaking AB and CD in this example represent line segments of two infinitely long lines. Hence the line segment AB does not have to intersect line segment CD for the lines to be considered perpendicular, because if the line segments are extended out to infinity, they would still form congruent adjacent angles. If a line is perpendicular to another, as in Figure 1, all of the angles created by their intersection are called right angles (right angles measure ½π radians, or 90°). Conversely, any lines that meet to form right angles are perpendicular. In a coordinate plane, perpendicular lines have opposite reciprocal slopes. A horizontal line has slope equal to zero, while the slope of a vertical line is described as undefined or sometimes ±infinity. Two lines that are perpendicular are denoted with the perpendicular symbol, for example AB ⊥ CD. # Numerical criteria ## In terms of slopes In a Cartesian coordinate system, two straight lines L and M may be described by the equations y = ax + b and y = cx + d, as long as neither is vertical. Then a and c are the slopes of the two lines. The lines L and M are perpendicular if and only if the product of their slopes is -1, that is, if ac = -1. The perpendiculars to vertical lines are always horizontal lines, and the perpendiculars to horizontal lines are always vertical lines. All horizontal lines are perpendicular to all vertical lines; that is, for any vertical line P : x = J and horizontal line Q : y = K, where J and K are constants, P ⊥ Q. # Construction of the perpendicular To construct the perpendicular to the line AB through the point P using compass and straightedge, proceed as follows (see Figure 2). - Step 1 (red): construct a circle with center at P to create points A' and B' on the line AB, which are equidistant from P. - Step 2 (green): construct circles centered at A' and B', both passing through P. Let Q be the other point of intersection of these two circles. - Step 3 (blue): connect P and Q to construct the desired perpendicular PQ. To prove that PQ is perpendicular to AB (letting O be the point where PQ crosses AB), use the SSS congruence theorem for triangles QPA' and QPB' to conclude that angles OPA' and OPB' are equal. Then use the SAS congruence theorem for triangles OPA' and OPB' to conclude that angles POA' and POB' are equal, so both are right angles. # In relationship to parallel lines As shown in Figure 3, if two lines (a and b) are both perpendicular to a third line (c), all of the angles formed on the third line are right angles. Therefore, in Euclidean geometry, any two lines that are both perpendicular to a third line are parallel to each other, because of the parallel postulate. Conversely, if one line is perpendicular to a second line, it is also perpendicular to any line parallel to that second line. In Figure 3, all of the orange-shaded angles are congruent to each other and all of the green-shaded angles are congruent to each other, because vertical angles are congruent and alternate interior angles formed by a transversal cutting parallel lines are congruent. Therefore, if lines a and b are parallel, any of the following conclusions leads to all of the others: - One of the angles in the diagram is a right angle.
- One of the orange-shaded angles is congruent to one of the green-shaded angles. - Line 'c' is perpendicular to line 'a'. - Line 'c' is perpendicular to line 'b'. # Finding the perpendiculars of a function ### Algebra In algebra, for any linear equation y = mx + b, the perpendiculars will all have a slope of (-1/m), the opposite reciprocal of the original slope. It is helpful to memorize the slogan "to find the slope of the perpendicular line, flip the fraction and change the sign." Recall that any whole number a is itself over one, and can be written as (a/1). To find the perpendicular of a given line which also passes through a particular point (x, y), solve the equation y = (-1/m)x + b, substituting in the known values of m, x, and y to solve for b. ### Calculus First find the derivative of the function. This will be the slope (m) of the curve at a particular point (x, y). Then, as above, solve the equation y = (-1/m)x + b, substituting in the known values of m, x, and y to solve for b.
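As a quick numerical check of the "flip the fraction and change the sign" rule, the short sketch below computes the perpendicular through a given point and verifies that the product of the two slopes is -1. The function name and the example numbers are chosen purely for illustration.

```python
# Find the line perpendicular to y = m*x + b that passes through (x0, y0).
# Function name and example values are illustrative, not taken from the text.

def perpendicular_through(m: float, x0: float, y0: float):
    """Return (m_perp, b_perp) so that y = m_perp*x + b_perp passes through (x0, y0)."""
    if m == 0:
        # a horizontal line has a vertical perpendicular, x = x0, which has no slope-intercept form
        raise ValueError("perpendicular to a horizontal line is the vertical line x = x0")
    m_perp = -1.0 / m              # opposite reciprocal: flip the fraction, change the sign
    b_perp = y0 - m_perp * x0      # solve y0 = m_perp*x0 + b_perp for the intercept
    return m_perp, b_perp

# Example: the line y = 2x + 1 and the point (4, 3)
m_perp, b_perp = perpendicular_through(2.0, 4.0, 3.0)
print(m_perp, b_perp)              # -0.5 5.0, i.e. the perpendicular is y = -0.5x + 5
assert abs(2.0 * m_perp - (-1.0)) < 1e-12   # the product of the two slopes is -1
```

For the calculus case described above, m would first be obtained by evaluating the derivative of the function at the point of interest, after which the same routine applies.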
https://www.wikidoc.org/index.php/Perpendicular
Perseveration
Perseveration # Overview Perseveration is defined as uncontrollable repetition of a particular response, such as a word, phrase, or gesture, despite the absence or cessation of a stimulus, usually caused by brain injury or other organic disorder. If an issue has been fully broached and discussed to a point of resolution, it is not uncommon for something to trigger the re-investigation of the matter. This can happen at any time during a conversation. This is particularly true for those who have had traumatic brain injuries. Those with Asperger's syndrome also display a form of perseveration in that they focus on one or a number of narrow interests. A person with Asperger's might go to a department store repeatedly to look at air conditioners. Several researchers have tried to connect perseveration with a lack of inhibition; however, this connection could not be found, or was small.
https://www.wikidoc.org/index.php/Perseveration
Peter Breggin
Peter Breggin Peter R. Breggin is a controversial American psychiatrist, best known as a leader of the anti-psychiatry movement. He is a critic of biological psychiatry and psychiatric medication, and the author of books such as Toxic Psychiatry, Talking Back to Prozac, Talking Back to Ritalin, and Brain-Disabling Treatments in Psychiatry. # Early career and background Breggin's background includes Harvard College, Case Western Reserve Medical School, a teaching fellowship at Harvard Medical School, a two-year staff appointment to the National Institute of Mental Health (NIMH), and a faculty appointment to the Johns Hopkins University Department of Counseling. Breggin has been in practice since 1968. # Founder of Psychiatric Journal and Organization In 1971, Dr. Breggin founded the International Center for the Study of Psychiatry and Psychology (ICSPP), a nonprofit research and educational network. The Center is dedicated to shedding light upon the impact of mental health theory and practices upon individual well-being, personal freedom, and family and community values. In 2002 he also founded the peer-reviewed journal "Ethical Human Sciences and Services", later renamed "Ethical Human Psychology and Psychiatry". This journal "is the official journal of the International Center for the Study of Psychiatry". The stated goal of the publication is to "raise the level of scientific knowledge and ethical discourse, while empowering professionals who are devoted to principled human sciences and services unsullied by professional and economic interests". # Critic of conventional psychiatry Dr. Breggin concentrates on the iatrogenic effects (negative side effects) of psychiatric medications, arguing that the impact of negative side effects typically outweighs any benefit. Breggin also argues that psychosocial interventions are almost always superior in treating mental illness. He stated: "I don't believe in the psychiatric drugs myself. I've been in practice since 1968, and I've never started anyone on psychiatric drugs". For over three decades, he has campaigned against psychoactive drugs, electroshock, psychosurgery, coercive involuntary treatment, and biological theories of psychiatry. According to Dr. Breggin, the pharmaceutical industry propagates disinformation which is accepted by unsuspecting doctors: "The psychiatrist accepts the bad science that establishes the existence of all these mental diseases in the first place. From there it's just a walk down the street to all the drugs as remedies". He points out problems with conflicts of interest (such as the financial relationships between drug companies, researchers, and the American Psychiatric Association). Breggin states psychiatric drugs "...are all, every class of them, highly dangerous". He asserts: "If neuroleptics were used to treat anyone other than mental patients, they would have been banned a long time ago. If their use wasn't supported by powerful interest groups, such as the pharmaceutical industry and organized psychiatry, they would be rarely used at all. Meanwhile, the neuroleptics have produced the worst epidemic of neurological disease in history. At the least, their use should be severely curtailed." In a recent book, Reclaiming Our Children, he calls for the ethical treatment of children and argues that our society's mistreatment of children is a national tragedy (including the role of sexual, physical, and emotional abuse).
He also objects to prescribing psychiatric medications to preschoolers, stating that this is risky and potentially harmful to their developing brains and nervous systems. # Criticism of ADHD and Ritalin The New York Times has labeled Dr. Breggin as the nation's best-known ADHD critic. As early as 1991 he coined the acronym DADD, stating, "...most so-called ADHD children are not receiving sufficient attention from their fathers who are separated from the family, too preoccupied with work and other things, or otherwise impaired in their ability to parent. In many cases the appropriate diagnosis is Dad Attention Deficit Disorder (DADD)". Breggin has written two books specifically on the topic, entitled Talking Back to Ritalin and The Ritalin Factbook. In these books he has made some controversial claims such as, "Ritalin "works" by producing malfunctions in the brain rather than by improving brain function. This is the only way it works". Forbes credited Breggin with "almost single-handedly reenergizing the anti-Ritalin contingent", which led to a "flurry of lawsuits and news stories". Breggin also testified to Congress with Fred Baughman. In Congress Dr. Breggin claimed "that there were no scientific studies validating ADHD, that all these kids needed was "discipline and better instruction", and that therapeutic stimulants "are the most addictive drugs known in medicine today". PBS Frontline also did a five-part TV series entitled 'Medicating Kids', which was specifically about ADHD. Fred Baughman and Dr. Breggin were the major critics used in this series. In an interview during this time period he referred to ADHD as a "fiction". This increased critical attention to Ritalin culminated with the Ritalin class action lawsuits against Novartis, the American Psychiatric Association (APA), and CHADD, in which the plaintiffs sued for fraud. Specifically, they charged that the defendants had conspired to invent and promote the disorder ADHD to create a highly profitable market for the drug Ritalin. At the time, these cases were considered "the next tobacco" and garnered national media attention. Dr. Breggin was the medical consultant for several of the class action lawsuits. All five lawsuits were dismissed or withdrawn before they went to trial. # Criticism of SSRI antidepressants In the early 1990s, Dr. Breggin pointed out problems with the research methodology used in studies of SSRI antidepressants. In 2005, the FDA began requiring "black-box" warnings on SSRIs, warning of an association between SSRI use and suicidal behavior in children. In 2006, the FDA expanded the warnings to include adults taking Paxil (which is associated with a higher risk of suicidal behavior as compared to placebo). These policy actions were taken approximately 15 years after Dr. Breggin first wrote about the subject. Dr. Breggin believes his contributions have gone uncredited. In contrast to Breggin's early work on Prozac, which was largely ignored, Prozac Backlash, a critique of SSRIs by Harvard psychiatrist Joseph Glenmullen, was widely praised by high-profile media sources. This was addressed by Dr. Breggin in a subsequent book, The Antidepressant Fact Book. Glenmullen has never countered Breggin's assertion, and they both presented at the annual conference (in Queens, NY in 2004) of the International Center for the Study of Psychiatry and Psychology. # Criticism of ECT Dr. Breggin has written several books critical of electroconvulsive therapy.
He is quoted by Time Magazine as stating, "...the damage produces delirium so severe that patients can't fully experience depression or other higher mental functions during the several weeks after electroshock". # Controversial commentary Due to his outspoken criticisms of many aspects of psychiatry, Dr. Breggin has become a controversial figure regularly at odds with the mainstream mental health establishment. He uses terms like "fraud" to describe mental disorders, the medication used to treat these disorders, and the political process that determines the labels used for diagnosing mental disorders. He has also consistently warned about conflict-of-interest problems. These claims often challenge accepted standards of care within the mental health field and have led to highly critical rebuttals. In 1994, the president of the American Psychiatric Association called Breggin a "flat-earther" (suggesting he embraced outdated theories); the head of the National Alliance on Mental Illness (NAMI) called Breggin "ignorant"; and the former head of the National Institute of Mental Health called him an "outlaw." Although he regularly critiques and has written reviews of the scientific literature, Dr. Breggin has not published controlled, independent, peer-reviewed research to substantiate his claims. He has been accused by critics of cherry-picking information from the research of others to draw unrelated conclusions. Stephen Barrett of Quackwatch, a retired psychiatrist and critic of Breggin, has stated: "he would like you to believe that his clinical experiences and investigations have enabled him to reach a level of insight that is greater than that of the majority of mental health professionals". Russell Barkley, an expert in ADHD, has also expressed reservations about Breggin's ideas: "...the flaws of both his research methods and his arguments are evident to any scientist even slightly familiar with the scientific literature". In 1987, NAMI brought a lawsuit against Dr. Breggin. They were upset about remarks he made on the Oprah Winfrey Show on April 2, 1987. He stated that mental health clients should judge their clinicians in terms of their empathy and support; if they tried to prescribe drugs during the first session, he advised such clients to seek assistance elsewhere. He also pointed out the iatrogenic effects of neuroleptic drugs. He was supported by a diverse group of psychiatrists and others who defended his right to publicly state his critical opinion. Breggin was cleared of any wrongdoing by the Maryland medical board. Time magazine has noted that other mental health professionals worry that "Breggin reinforces the myth that mental illness is not real, that you wouldn't be ill if you'd pull yourself up by the bootstraps...his views stop people from getting treatment. They could cost a life." # Expert witness Dr. Breggin has had a mixed record in the court system. He has been involved in cases that won large verdicts for patients disabled by the iatrogenic effects of psychiatric drugs, as well as having his testimony accepted in criminal trials regarding the iatrogenic effects of antidepressant medications. Breggin testified as an expert witness in the Wesbecker case (Fentress et al., 1994), a lawsuit against Eli Lilly, makers of Prozac. Ultimately, the jury found for Eli Lilly. It was later revealed that the plaintiffs and defendants had secretly settled behind closed doors.
Breggin alleges that pharmaceutical manufacturers have committed ad hominem attacks upon him in the form of linking him to Scientology campaigns against psychiatric drugs. In particular, Breggin levels this accusation against Eli Lilly. Breggin acknowledges that he did work with Scientology starting in 1972, but states that by 1974 he "found [himself] opposed to Scientology's values, agenda, and tactics", and in consequence "stopped all cooperative efforts in 1974 and publicly declared [his] criticism of the group in a letter published in Reason." Breggin has also stated that he has personal reasons to dislike Scientology since his wife, Ginger, was once a member. Some judges have questioned Breggin's credibility in some cases where he was called as an expert witness. For example, a Maryland judge in a medical malpractice case in 1995 said, "I believe that his bias in this case is blinding. . . he was mistaken in a lot of the factual basis for which he expressed his opinion". In that same year a Virginia judge excluded Breggin's testimony, stating, "This court finds that the evidence of Peter Breggin, as a purported expert, fails nearly all particulars under the standard set forth in Daubert and its progeny. . . Simply put, the Court believes that Dr. Breggin's opinions do not rise to the level of an opinion based on 'good science'". In 2002, Dr. Breggin was hired as an expert witness by a survivor of the Columbine High School massacre in a case against the makers of an anti-depressant drug. In his report, Dr. Breggin failed to mention the Columbine incident or one of the killers, instead focusing on the medication taken by the other: "...Eric Harris was suffering from a substance-induced (Luvox-induced) mood disorder with depressive and manic features that had reached a psychotic level of violence and suicide. Absent persistent exposure to Luvox, Eric Harris probably would not have committed violence and suicide". However, according to The Denver Post, the judge of the case "was visibly angry that the experts failed to view evidence prior to their depositions" even though they had months to do so. The evidence would have included hundreds of documents, including a significant amount of video and audio tape that the killers had recorded. The judge stated, "...lawyers will be free to attack them on the basis of the evidence they haven't seen and haven't factored into their opinions". The lawsuit was eventually dropped with the stipulation that the makers of Luvox donate $10,000 to the American Cancer Society. In 2005, a court disqualified the testimony of Breggin because it did not meet the scientific rigor established by the Frye Standard. The judge stated, "...Breggin spends 14 pages critiquing the treatment provided not because it ran counter to the acceptable standards of care, but because it ran counter to Breggin's personal ideas and ideologies of what the standards ought to be." # Publishing and research Since 1964 he has published on his major topic of interest, clinical psychopharmacology, and has authored dozens of other articles and nineteen books. Many of Breggin's more recent articles are published in the peer-reviewed journal he founded, Ethical Human Sciences and Services, and in the International Journal of Risk and Safety in Medicine. Many of his published works deal with psychiatric medication, the FDA and drug approval process, the evaluation of clinical trials, and standards of care in psychiatry and related fields.
Breggin does not accept any money from pharmaceutical companies. Breggin now lives and practices in Ithaca, New York, where he treats children, adults and families. # Bibliography - Toxic Psychiatry: Why Therapy, Empathy and Love Must Replace the Drugs, Electroshock, and Biochemical Theories of the "New Psychiatry" (1994) - Beyond Conflict: From Self-Help and Psychotherapy to Peacemaking (1995) - "Talking Back To Prozac: What Doctors Aren't Telling You About Today's Most Controversial Drug (1995) - Your Drug May Be Your Problem: How and Why to Stop Taking Psychiatric Medications (with David Cohen) (2000) - The Anti-Depressant Fact Book: What Your Doctor Won't Tell You About Prozac, Zoloft, Paxil, Celexa, and Luvox (2001) - Talking Back to Ritalin: What Doctors Aren't Telling You About Stimulants and ADHD (Forward by Dick Scruggs) (2001) - Reclaiming Our Children: A Healing Solution for a Nation in Crisis (2001) - The Heart of Being Helpful: Empathy And the Creation of a Healing Presence (2006) - The Ritalin Fact Book: What Your Doctor Won't Tell You (2006) # Selected scholarly works - Breggin, P.R. (2006). Court filing makes public my previously suppressed analysis of Paxil's effects. Ethical Human Psychology and Psychiatry, 8, 77-84. - Breggin, P.R. (2006). Recent regulatory changes in antidepressant labels: Implications of activation (stimulation) for clinical practice. Primary Psychiatry, 13(1), 57-60. - Breggin, P.R. (2004). Recent U.S., Canadian and British regulatory agency actions concerning antidepressant-induced harm to self and others: A review and analysis. International Journal of Risk and Safety in Medicine,16, 247-259. - Breggin, P.R. (2003). Suicidality, violence and mania caused by selective serotonin reuptake inhibitors (SSRIs): A review and analysis. International Journal of Risk and Safety in Medicine, 16, 31-49. - Breggin, P.R. (2000). Psychopharmacology and human values. Journal of Humanistic Psychology, 43, 34-49. - Breggin, P.R. (2000). The psychiatric drugging of toddlers. Ethical Human Sciences and Services, 2(2), 83-86. - Breggin, P.R. (2000). The NIMH multimodal study of treatment for Attention-Deficit/Hyperactivity Disorder: A critical analysis. International Journal of Risk and Safety in Medicine, 13,15-22. - Breggin, P.R. (2001). From Prozac to Ecstasy: The implications of new evidence for drug-induced brain damage. Ethical Human Sciences and Services, 3(1), 3-5. - Breggin, P.R. (2000). What psychologists and psychotherapists need to know about ADHD and stimulants. Changes: An International Journal of Psychology and Psychotherapy,18,13-23. - Breggin, P.R. (1999). Psychostimulants in the treatment of children diagnosed with ADHD: Risks and mechanism of action. International Journal of Risk and Safety in Medicine, 12, 3-35. - Breggin, P.R. (1998). Psychotherapy in emotional crises without resort to psychiatric medication. The Humanistic Psychologist, 25, 2-14. - Breggin, P.R. (1998). Analysis of adverse behavioral effects of benzodiazepines with a discussion on drawing scientific conclusions from the FDA's spontaneous reporting system. Journal of Mind and Behavior, 19(1), 21-50. - Breggin, P.R. (1994). Should the use of neuroleptics be severely limited? Controversial Issues in Mental Health, edited by S.A. Kirk and S.D. Einbinder, pp. 146-152. - Breggin, P.R. (1990). Brain damage, dementia and persistent cognitive dysfunction associated with neuroleptic drugs: Evidence, etiology, implications. Journal of Mind and Behavior, 11, 425-464. - Breggin, P.R. (1986). 
Neuropathology and cognitive dysfunction From ECT (Electroconvulsive/"shock" therapy). Psychopharmacology Bulletin , 22, 476-479. - Breggin, P.R. (1982). The return of lobotomy and psychosurgery. Reprinted in R.B. Edwards (ed.): Psychiatry and Ethics. Buffalo, Prometheus Books, 1982. Published earlier in Quality of Health Care-Human Experimentation: Hearings Before Senator Edward Kennedy's Subcommittee on Health, U.S. Senate, Washington, D.C., US Government Printing Office, 1973. - Breggin, P.R. (1982). Coercion of voluntary patients in an open hospital. In R.B. Edwards(ed): Psychiatry and Ethics. Prometheus Books, 1982. Reprinted from Breggin, P.R. (1964). Archives of General Psychiatry, 10, 173-181. - Breggin, P.R. (1980). Brain-disabling therapies. In E. Valenstein (ed.), The Psychosurgery Debate, W.H. Freeman, San Francisco, CA, 1980. - Breggin, P.R. (1975). Psychosurgery for the Control of violence: A critical review. In W. Fields and W. Sweet (eds.), Neural Bases of Violence and Aggression, Warren H. Green, Inc., St. Louis, MO, 350-378, 1975. - Breggin, P.R. (1971). Psychotherapy as applied ethics. Psychiatry, 34, 59-75.
Peter Breggin Peter R. Breggin is a controversial American psychiatrist, best known as a leader of Anti-psychiatry movement. He is a critic of biological psychiatry and psychiatric medication, and as the author of books such as Toxic Psychiatry, Talking Back to Prozac, Talking Back to Ritalin, and Brain-Disabling Treatments in Psychiatry. # Early career and background Breggin's background includes Harvard College, Case Western Reserve Medical School, a teaching fellowship at Harvard Medical School, a two-year staff appointment to the National Institute of Mental Health (NIMH), and a faculty appointment to the Johns Hopkins University Department of Counseling. Breggin has been in practice since 1968. # Founder of Psychiatric Journal and Organization In 1971, Dr. Breggin founded the International Center for the Study of Psychiatry and Psychology (ICSPP), a nonprofit research and educational network. The Center is dedicated to shedding light upon the impact of mental health theory and practices upon individual well-being, personal freedom, and family and community values. In 2002 he also founded the peer-review journal, "Ethical Human Sciences and Services", renamed as "Ethical Human Psychology and Psychiatry". This journal "is the official journal of the International Center for the Study of Psychiatry".[1] The stated goal of the publication is to, "raise the level of scientific knowledge and ethical discourse, while empowering professionals who are devoted to principled human sciences and services unsullied by professional and economic interests".[2] # Critic of conventional psychiatry Dr. Breggin concentrates on the iatrogenic effects (negative side effects) of psychiatric medications, arguing that the impact of negative side effects typically outweighs any benefit. Breggin also argues that psychosocial interventions are almost always superior in treating mental illness. He stated; "I don't believe in the psychiatric drugs myself. I've been in practice since 1968, and I've never started anyone on psychiatric drugs".[3] For over three decades, he has campaigned against psychoactive drugs, electroshock, psychosurgery, coercive involuntary treatment, and biological theories of psychiatry. According to Dr. Breggin, the pharmaceutical industry propagates disinformation which is accepted by unsuspecting doctors, "The psychiatrist accepts the bad science that establishes the existence of all these mental diseases in the first place. From there it’s just a walk down the street to all the drugs as remedies". He points out problems with conflicts-of-interest (such as the financial relationships between drug companies, researchers, and the American Psychiatric Association). Breggin states psychiatric drugs, "...are all, every class of them, highly dangerous". He asserts: "If neuroleptics were used to treat anyone other than mental patients, they would have been banned a long time ago. If their use wasn't supported by powerful interest groups, such as the pharmaceutical industry and organized psychiatry, they would be rarely used at all. Meanwhile, the neuroleptics have produced the worst epidemic of neurological disease in history. At the least, their use should be severely curtailed."[3] In a recent book, Reclaiming Our Children, he calls for the ethical treatment of children and argues that our society's mistreatment of children is a national tragedy (including the role of sexual, physical, and emotional abuse). 
He also objects to prescribing psychiatric medications to preschoolers, stating that this is risky and potentially harmful to their developing brains and nervous systems.[4] # Criticism of ADHD and Ritalin The New York Times has labeled Dr. Breggin as the nation's best-known ADHD critic. As early as 1991 he coined the acronym DADD, stating, "...most so-called ADHD children are not receiving sufficient attention from their fathers who are separated from the family, too preoccupied with work and other things, or otherwise impaired in their ability to parent. In many cases the appropriate diagnosis is Dad Attention Deficit Disorder (DADD)". Breggin his written two books specifically on the topic entitled, Talking Back to Ritalin and The Ritalin Factbook. In these books he has made some controversial claims such as, "Ritalin "works" by producing malfunctions in the brain rather than by improving brain function. This is the only way it works".[5] Forbes credited Breggin with "almost single-handedly reenergizing the anti-Ritalin contingent", which lead to a "flurry of lawsuits and news stories".[6]Breggin also testified to Congress with Fred Baughman. In Congress Dr. Breggin claimed "that there were no scientific studies validating ADHD, that all these kids needed was "discipline and better instruction", and that therapeutic stimulants "are the most addictive drugs known in medicine today".[7] PBS Frontline also did a five part TV series entitled 'Medicating Kids', which was specifically about ADHD. Fred Baughman and Dr. Breggin were the major critics used in this series.[8] In an interview during this time period he referred to ADHD as a "fiction". This increased critical attention to Ritalin culminated with the Ritalin class action lawsuits against Novartis, the American Psychiatric Association (APA), and CHADD in which the plaintiffs sued for fraud. Specifically, they charged that the defendants had conspired to invent and promote the disorder ADHD to create a highly profitable market for the drug Ritalin. At the time, these cases were considered "the next tobacco" and garnered national media attention.[9] Dr. Breggin was the medical consultant for several of the class action lawsuits. All five lawsuits were dismissed or withdrawn before they went to trial. # Criticism of SSRI antidepressants In the early 1990s, Dr. Breggin pointed out the problems with research methodology in the research of SSRI antidepressants. In 2005, the FDA began requiring "black-box" warnings on SSRIs, warning of an association between SSRI use and suicidal behavior in children.[10] In 2006, the FDA expanded the warnings to include adults taking Paxil (which is associated with a higher risk of suicidal behavior as compared to placebo[11]). These policy actions were taken approximately 15 years after Dr. Breggin first wrote about the subject. Dr. Breggin believes his contributions have gone uncredited. In contrast to Breggin's early work on Prozac, which was largely ignored, Prozac Backlash, a critique of SSRIs by Harvard psychiatrist Joseph Glenmullen, was widely praised by high-profile media sources.[12] This was addressed by Dr.Breggin in a subsequent book, The Antidepressant Fact Book: Glenmullen has never countered Breggin's assertion and they both presented at the annual conference (in Queens, NY in 2004) of the International Center for the Study of Psychiatry and Psychology. # Criticism of ECT Dr. Breggin has written several books critical of electroconvulsive therapy. 
He is quoted by Time Magazine as stating, "...the damage produces delirium so severe that patients can't fully experience depression or other higher mental functions during the several weeks after electroshock". # Controversial commentary Due to his outspoken criticisms of many aspects of psychiatry, Dr. Breggin has become a controversial figure regularly at odds with the mainstream mental health establishment. He uses terms like "fraud" to describe mental disorders, the medication used to treat these disorders, and the political process that determines the labels used for diagnosing mental disorders. He has also consistently warned about conflict of interest problems. [14] These claims often challenge accepted standards of care within the mental health field and have led to highly critical rebuttals.[15] In 1994, the president of the American Psychiatric Association called Breggin a "flat-earther" (suggesting he embraced outdated theories); the head of the National Alliance on Mental Illness (NAMI) called Breggin "ignorant"; and the former head of the National Institute of Mental Health called him an "outlaw."[16] Although he regularly critiques [17] and has written reviews [18] of the scientific literature, Dr. Breggin has not published controlled, independent peer reviewed research to substantiate his claims. He has been accused, by critics, of cherry picking information from the research of others to draw unrelated conclusions.[19] Stephen Barrett of Quackwatch, a retired psychiatrist and critic of Breggin, has stated; "he would like you to believe that his clinical experiences and investigations have enabled him to reach a level of insight that is greater than that of the majority of mental health professionals".[20] Russell Barkley, an expert in ADHD, has also expressed reservations about Breggin's ideas. "...the flaws of both his research methods and his arguments are evident to any scientist even slightly familiar with the scientific literature".[21] In 1987, NAMI brought a lawsuit against Dr. Breggin. They were upset about remarks he made on the Oprah Winfrey Show on April 2, 1987. He stated that mental health clients should judge their clinicians in terms of their empathy and support; if they tried to prescribe drugs during the first session, he advised such clients to seek assistance elsewhere. He also pointed out the iatrogenic effects of neuroleptic drugs. He was defended by a diverse group of psychiatrists and others who defended his right to publicly state his critical opinion.[22] Breggin was cleared of any wrongdoing by the Maryland medical board.[23] Time magazine has noted that other mental health professionals worry that "Breggin reinforces the myth that mental illness is not real, that you wouldn't be ill if you'd pull yourself up by the bootstraps...his views stop people from getting treatment. They could cost a life."[24] # Expert witness Dr. Breggin has had a mixed record in the court system. He has been involved in cases that won large verdicts for patients disabled by the iatrogenic effects of psychiatric drugs[25][26][27][28] as well as having his testimony accepted in criminal trials regarding the iatrogenic effects of antidepressant medications.[29] Breggin testified as an expert witness in the Wesbecker case (Fentress et al., 1994), a lawsuit against Eli Lilly, makers of Prozac. Ultimately, the jury found for Eli Lilly. It was later revealed that the plaintiffs and defendants had secretly settled behind closed doors. 
[30][31] Breggin alleges that pharmaceutical manufacturers have committed ad hominem attacks upon him in the form of linking him to Scientology campaigns against psychiatric drugs. In particular, Breggin levels this accusation against Eli Lilly. Breggin acknowledges that he did work with Scientology starting in 1972, but states that by 1974 he "found [himself] opposed to Scientology's values, agenda, and tactics", and in consequence "stopped all cooperative efforts in 1974 and publicly declared [his] criticism of the group in a letter published in Reason." [32] Breggin has also stated that he has personal reasons to dislike Scientology since his wife, Ginger, was once a member. [32] [14] Some judges have questioned Breggin's credibility in some cases where he was called as an expert witness. For example, a Maryland judge in a medical malpractice case in 1995 said, "I believe that his bias in this case is blinding. . . he was mistaken in a lot of the factual basis for which he expressed his opinion". In that same year a Virginia judge excluded Breggin's testimony, stating, "This court finds that the evidence of Peter Breggin, as a purported expert, fails nearly all particulars under the standard set forth in Daubert and its progeny. . . Simply put, the Court believes that Dr. Breggin's opinions do not rise to the level of an opinion based on 'good science'". In 2002, Dr. Breggin was hired as an expert witness by a survivor of the Columbine High School massacre in a case against the makers of an anti-depressant drug. In his report, Dr. Breggin failed to mention the Columbine incident or one of the killers, instead focusing on the medication taken by the other: "...Eric Harris was suffering from a substance-induced (Luvox-induced) mood disorder with depressive and manic features that had reached a psychotic level of violence and suicide. Absent persistent exposure to Luvox, Eric Harris probably would not have committed violence and suicide".[33] However, according to The Denver Post, the judge of the case "...was visibly angry that the experts failed to view evidence prior to their depositions" even though they had months to do so. The evidence would have included hundreds of documents, including a significant amount of video and audio tape that the killers had recorded. The judge stated, "...lawyers will be free to attack them on the basis of the evidence they haven't seen and haven't factored into their opinions".[34] The lawsuit was eventually dropped with the stipulation that the makers of Luvox donate $10,000 to the American Cancer Society.[35] In 2005, a court disqualified the testimony of Breggin because it did not meet the standard of scientific rigor established by the Frye Standard. The judge stated: "...Breggin spends 14 pages critiquing the treatment provided not because it ran counter to the acceptable standards of care, but because it ran counter to Breggin’s personal ideas and ideologies of what the standards ought to be.” [36] [37] # Publishing and research Since 1964 he has published on his major topic of interest, clinical psychopharmacology, and has authored dozens of other articles and nineteen books. Many of Breggin's more recent articles are published in the peer-reviewed journal he founded, Ethical Human Sciences and Services, and in the International Journal of Risk and Safety in Medicine.
Many of his published works deal with psychiatric medication, the FDA and the drug approval process, the evaluation of clinical trials, and standards of care in psychiatry and related fields. Breggin does not accept any money from pharmaceutical companies.[citation needed] Breggin now lives and practices in Ithaca, New York, where he treats children, adults and families. # Bibliography - Toxic Psychiatry: Why Therapy, Empathy and Love Must Replace the Drugs, Electroshock, and Biochemical Theories of the "New Psychiatry" (1994) - Beyond Conflict: From Self-Help and Psychotherapy to Peacemaking (1995) - Talking Back to Prozac: What Doctors Aren't Telling You About Today's Most Controversial Drug (1995) - Your Drug May Be Your Problem: How and Why to Stop Taking Psychiatric Medications (with David Cohen) (2000) - The Anti-Depressant Fact Book: What Your Doctor Won't Tell You About Prozac, Zoloft, Paxil, Celexa, and Luvox (2001) - Talking Back to Ritalin: What Doctors Aren't Telling You About Stimulants and ADHD (Foreword by Dick Scruggs) (2001) - Reclaiming Our Children: A Healing Solution for a Nation in Crisis (2001) - The Heart of Being Helpful: Empathy and the Creation of a Healing Presence (2006) - The Ritalin Fact Book: What Your Doctor Won't Tell You (2006) # Selected scholarly works - Breggin, P.R. (2006). Court filing makes public my previously suppressed analysis of Paxil's effects. Ethical Human Psychology and Psychiatry, 8, 77-84. - Breggin, P.R. (2006). Recent regulatory changes in antidepressant labels: Implications of activation (stimulation) for clinical practice. Primary Psychiatry, 13(1), 57-60. - Breggin, P.R. (2004). Recent U.S., Canadian and British regulatory agency actions concerning antidepressant-induced harm to self and others: A review and analysis. International Journal of Risk and Safety in Medicine, 16, 247-259. - Breggin, P.R. (2003). Suicidality, violence and mania caused by selective serotonin reuptake inhibitors (SSRIs): A review and analysis. International Journal of Risk and Safety in Medicine, 16, 31-49. - Breggin, P.R. (2000). Psychopharmacology and human values. Journal of Humanistic Psychology, 43, 34-49. - Breggin, P.R. (2000). The psychiatric drugging of toddlers. Ethical Human Sciences and Services, 2(2), 83-86. - Breggin, P.R. (2000). The NIMH multimodal study of treatment for Attention-Deficit/Hyperactivity Disorder: A critical analysis. International Journal of Risk and Safety in Medicine, 13, 15-22. - Breggin, P.R. (2001). From Prozac to Ecstasy: The implications of new evidence for drug-induced brain damage. Ethical Human Sciences and Services, 3(1), 3-5. - Breggin, P.R. (2000). What psychologists and psychotherapists need to know about ADHD and stimulants. Changes: An International Journal of Psychology and Psychotherapy, 18, 13-23. - Breggin, P.R. (1999). Psychostimulants in the treatment of children diagnosed with ADHD: Risks and mechanism of action. International Journal of Risk and Safety in Medicine, 12, 3-35. - Breggin, P.R. (1998). Psychotherapy in emotional crises without resort to psychiatric medication. The Humanistic Psychologist, 25, 2-14. - Breggin, P.R. (1998). Analysis of adverse behavioral effects of benzodiazepines with a discussion on drawing scientific conclusions from the FDA's spontaneous reporting system. Journal of Mind and Behavior, 19(1), 21-50. - Breggin, P.R. (1994). Should the use of neuroleptics be severely limited? Controversial Issues in Mental Health, edited by S.A. Kirk and S.D. Einbinder, pp. 146-152. - Breggin, P.R. (1990). Brain damage, dementia and persistent cognitive dysfunction associated with neuroleptic drugs: Evidence, etiology, implications. Journal of Mind and Behavior, 11, 425-464. - Breggin, P.R. (1986). Neuropathology and cognitive dysfunction from ECT (Electroconvulsive/"shock" therapy). Psychopharmacology Bulletin, 22, 476-479. - Breggin, P.R. (1982). The return of lobotomy and psychosurgery. Reprinted in R.B. Edwards (ed.): Psychiatry and Ethics. Buffalo, Prometheus Books, 1982. Published earlier in Quality of Health Care-Human Experimentation: Hearings Before Senator Edward Kennedy's Subcommittee on Health, U.S. Senate, Washington, D.C., US Government Printing Office, 1973. - Breggin, P.R. (1982). Coercion of voluntary patients in an open hospital. In R.B. Edwards (ed.): Psychiatry and Ethics. Prometheus Books, 1982. Reprinted from Breggin, P.R. (1964). Archives of General Psychiatry, 10, 173-181. - Breggin, P.R. (1980). Brain-disabling therapies. In E. Valenstein (ed.), The Psychosurgery Debate, W.H. Freeman, San Francisco, CA, 1980. - Breggin, P.R. (1975). Psychosurgery for the control of violence: A critical review. In W. Fields and W. Sweet (eds.), Neural Bases of Violence and Aggression, Warren H. Green, Inc., St. Louis, MO, 350-378, 1975. - Breggin, P.R. (1971). Psychotherapy as applied ethics. Psychiatry, 34, 59-75.
https://www.wikidoc.org/index.php/Peter_Breggin
3129c98a34c2c48861d86c13659880fa330615d8
wikidoc
Temporal bone
Temporal bone The temporal bones are situated at the sides and base of the skull. The temporal bone supports that part of the face known as the temple. # Parts Each consists of five parts: - Squama temporalis - Mastoid portion - Petrous portion - Tympanic part - Styloid process (temporal) # Composition The structure of the squama is like that of the other cranial bones: the mastoid portion is spongy, and the petrous portion dense and hard. # Diagnosis ## Physical Examination ### Ear Nose and Throat - Axial CT scan showing oblique left temporal bone fracture. - Hemotympanum (blood in the middle ear) causes a bluish discoloration of the drum. - Ruptured tympanic membrane and blood in the ear canal (surgeon's view). - Battle's Sign. Bluish discoloration of the post-auricular region, associated with temporal bone fractures. - Cerebrospinal fluid (CSF) otorrhea. - Oblique left temporal bone fracture line crossing the mastoid process, into Henle's spine and the external auditory canal (surgeon's view). # Additional images - The skull from the front. - Sphenoid bone visible center right. - Side view of the skull. - Left infratemporal fossa. - Sagittal section of skull. - Articulation of the mandible. Lateral aspect. - Base of the skull. Upper surface.
https://www.wikidoc.org/index.php/Petrous_temporal_bone
932c1a7bbf0283aa4f819ad9ab085925b3604f50
wikidoc
Phage therapy
Phage therapy Phage therapy is the therapeutic use of lytic bacteriophages to treat pathogenic bacterial infections. Bacteriophages, or "phages", are viruses that invade only bacterial cells and, in the case of lytic phages, cause the bacterium to burst and die, thus releasing more phages. Phage therapy is one of the viable alternatives to antibiotics, being developed for clinical use in the 21st century by many research groups in Europe and the US. After having been extensively used and developed mainly in former Soviet Union countries for about 90 years, phage therapy is now becoming more available in other countries such as the USA for a variety of bacterial and poly-microbial biofilm infections. Phage therapy has many applications in human medicine as well as dentistry, veterinary science and agriculture. An important benefit of phage therapy is that bacteriophages can be much more specific than more common drugs, so they can be chosen to be harmless not only to the host organism (human, animal or plant), but also to other beneficial bacteria, such as gut flora, reducing the chance of opportunistic infections. They also have few, if any, side effects, as opposed to drugs, and do not stress the liver. Because they replicate in vivo, a single, small dose is sometimes sufficient. On the other hand, this specificity is also a disadvantage: a phage will only kill a bacterium if it is a match to the specific subspecies; thus phage mixtures are often applied to improve the chances of success, or samples can be taken and an appropriate phage identified. Phages are currently being used therapeutically to treat bacterial infections that do not respond to conventional antibiotics. They tend to be more successful where there is a biofilm covered by a polysaccharide layer, which antibiotics typically cannot penetrate. Other biofilms include those on medical instruments, so an enzyme added to a phage can effectively and selectively wipe out even bacteria beneath these films, which is impossible currently in Western medicine. # History Following the discovery of bacteriophages by Frederick Twort and Felix d'Hérelle in 1915 and 1917, phage therapy was immediately recognized by many to be a key way forward for the eradication of bacterial infections. A Georgian, George Eliava, was making similar discoveries. He travelled to the Pasteur Institute in Paris where he met d'Hérelle, and in 1926 he founded an institute in Tbilisi, Georgia, devoted to the development of phage therapy. In neighbouring countries, including Russia, extensive research and development soon began in this field. In the USA during the 1940s, commercialization of phage therapy was undertaken by the large pharmaceutical company Eli Lilly. Whilst knowledge was being accumulated regarding the biology of phages and how to use phage cocktails correctly, early uses of phage therapy were often unreliable. When antibiotics were discovered in 1941 and marketed widely in the USA and Europe, Western scientists mostly lost interest in further use and study of phage therapy for some time. Isolated from Western advances in antibiotic production in the 1940s, Russian scientists continued to develop already successful phage therapy to treat the wounds of soldiers in field hospitals. During World War II, the Soviet Union used bacteriophages to treat many soldiers infected with various bacterial diseases, e.g. dysentery and gangrene. The success rate was as good as, if not better than, that of any antibiotic.
Russian researchers continued to develop and to refine their treatments and to publish their research and results. However, due to the scientific barriers of the Cold War, this knowledge was not translated and did not proliferate across the world. There is an extensive library and research center at the Eliava Institute in Tbilisi, Georgia. Phage therapy is today a widespread form of treatment in neighbouring countries. For 80 years Georgian doctors have been treating local people, including babies and newborns, with phages. "Phages will kill bacteria completely but only if they are matched well." As a result of the development of antibiotic resistance since the 1950s and advances in scientific knowledge, there is renewed interest worldwide in the ability of phage therapy to eradicate bacterial infections and chronic polymicrobial biofilm, along with other strategies. Phages have been explored as a means to eliminate pathogens like Campylobacter in raw food and Listeria in fresh food or to reduce food spoilage bacteria. In agricultural practice phages were used to fight pathogens like Campylobacter, Escherichia and Salmonella in farm animals, Lactococcus and Vibrio pathogens in fish from aquaculture and Erwinia and Xanthomonas in plants of agricultural importance. The oldest use was, however, in human medicine. Phages were used against diarrheal diseases caused by E. coli, Shigella or Vibrio and against wound infections caused by facultative pathogens of the skin like staphylococci and streptococci. Recently the phage therapy approach has been applied to systemic and even intracellular infections, and non-replicating phage and isolated phage enzymes like lysins have been added to the antimicrobial arsenal. However, definitive proof for the efficiency of these phage approaches in the field or the hospital is only provided in a few cases. # Benefits A clear benefit of phage therapy is that it does not have the potentially very severe adverse effects of antibiotics. Also it can be fast-acting, once the exact bacteria are identified and the phages administered. Another benefit of phage therapy is that although bacteria are able to develop resistance to phages, the resistance is much easier to overcome. The reason behind this is that phages replicate and undergo natural selection and have probably been infecting bacteria since the beginning of life on this planet. Although bacteria evolve at a fast rate, so too will phages. Being smaller, they can mutate faster. Bacteria are most likely to modify the molecule that the phage targets, such as a cell surface glycoprotein, which is usually a bacterial receptor. In response to this modification, phages will evolve in such a way that counteracts this change, thus allowing them to continue targeting bacteria and causing cell lysis. As a consequence, phage therapy is devoid of problems similar to antibiotic resistance. Bacteriophages are often very specific, targeting only one or a few strains of bacteria. Traditional antibiotics usually have a more wide-ranging effect, killing both harmful bacteria and useful bacteria such as those facilitating food digestion. The specificity of bacteriophages reduces the chance that useful bacteria are killed when fighting an infection. Increasing evidence shows the ability of phages to travel to a required site — including the brain, where the blood brain barrier can be crossed — and multiply in the presence of an appropriate bacterial host, to combat infections such as meningitis. However, the patient's immune system can, in some cases, mount an immune response to the phage (2 out of 44 patients in a Polish trial). Development and production are faster than for antibiotics, on condition that the required recognition molecules are known. Research groups in the West are engineering broader-spectrum phages and also targeted MRSA treatments in a variety of forms - including impregnated wound dressings, preventative treatment for burn victims, and phage-impregnated sutures. Enzobiotics, enzymes created from phage, are a new development at Rockefeller University. These show potential for preventing secondary bacterial infections, e.g. pneumonia developing in patients suffering from flu, otitis, etc. # Application ## Collection In its simplest form, phage treatment works by collecting local samples of water likely to contain high quantities of bacteria and bacteriophages, for example effluent outlets, sewage and other sources. They can also be extracted from corpses. The samples are taken and applied to the bacteria that are to be destroyed, which have been cultured on growth medium. The bacteria usually die, and the mixture is centrifuged. The phages collect on the top of the mixture and can be drawn off. The phage solutions are then tested to see which ones show growth suppression effects (lysogeny) and/or destruction (lysis) of the target bacteria. The phages showing lysis are then amplified on cultures of the target bacteria, passed through a filter to remove all but the phages, then distributed. ## Treatment Phages are "bacterium specific" and it is therefore necessary in many cases to take a swab from the patient and culture it prior to treatment. Isolation of therapeutic phages can require a few months to complete, but clinics generally keep supplies of phage cocktails for the most common bacterial strains in a geographical area. Phages in practice are applied orally, topically on infected wounds or spread onto surfaces, or used during surgical procedures. Injection is rarely used, avoiding any risks of trace chemical contaminants that may be present from the bacteria amplification stage, and recognizing that the immune system naturally fights against viruses introduced into the bloodstream or lymphatic system. In August 2006, the United States Food and Drug Administration approved spraying meat with phages. Although this initially raised concerns, since without mandatory labeling consumers won't be aware that meat and poultry products have been treated with the spray, it confirms to the public that, for example, phages against Listeria are generally recognized as safe (GRAS status) within the worldwide scientific community and opens the way for other phages to also be recognized as having GRAS status. Phage therapy is used for the treatment of a variety of bacterial infections including: laryngitis, skin infections, dysentery, conjunctivitis, periodontitis, gingivitis, sinusitis, urinary tract infections and intestinal infections, burns, boils, etc. - also poly-microbial biofilms on chronic wounds, ulcers and infected surgical sites. In 2007, Phase 2 clinical trials are nearing completion in a London throat, nose and ear hospital for Pseudomonas aeruginosa infections (otitis). Phase 1 clinical trials are underway in the South West Regional Wound Care Center, Lubbock, Texas, for an approved cocktail of phages, including P. aeruginosa, Staphylococcus aureus and Escherichia coli (better known as E. coli). Reviews of phage therapy indicate that more clinical and microbiological research is needed to meet current standards. ## Distribution Phages can usually be freeze dried and turned into pills without materially impacting efficacy. In pill form, temperature stability up to 55 °C and shelf lives of 14 months have been shown. Other forms of administration can include application in liquid form. These vials are usually best kept refrigerated. Oral administration works better when an antacid is included, as this increases the number of phages surviving passage through the stomach. Topical administration often involves application to gauzes that are laid on the area to be treated. # Obstacles ## General The host specificity of phage therapy may make it necessary for clinics to make different cocktails for treatment of the same infection or disease, because the bacterial components of such diseases may differ from region to region or even person to person. Such a process would make it difficult for large-scale production of phage therapy. Additionally, patent issues (specifically on living organisms) may complicate distribution for pharmaceutical companies wishing to have exclusive rights over their "invention", making it unlikely that a for-profit corporation will invest capital in the widespread application of this technology. In addition, due to the specificity of individual phages, for a high chance of success a mixture of phages is often applied. This means that 'banks' containing many different phages need to be kept and regularly updated with new phages, which makes regulatory testing for safety harder and more expensive. Some bacteria, for example Clostridium and Mycobacterium, have no known therapeutic phages available as yet. To work, the virus has to reach the site of the bacteria, and unlike antibiotics, viruses do not necessarily reach the same places that antibiotics can reach. Funding for phage therapy research and clinical trials is generally insufficient and difficult to obtain, since it is a lengthy and complex process to patent bacteriophage products. Scientists comment that 'the biggest hurdle is regulatory', whereas an official view is that individual phages would need proof individually because it would be too complicated to do as a combination, with many variables. Public awareness and education about phage therapy are generally limited to scientific or independent research rather than mainstream media. ## Safety Phage therapy is generally considered safe. As with antibiotic therapy and other methods of countering bacterial infections, endotoxins are released by the bacteria as they are destroyed within the patient (Herxheimer reaction). This can cause symptoms of fever. Care has to be taken in manufacture that the phage medium is free of bacterial fragments and endotoxins from the production process. Lysogenic bacteriophages are not generally used therapeutically. This group can act as a way for bacteria to exchange DNA, and this can help spread antibiotic resistance or even, theoretically, can make the bacteria pathogenic (see Cholera). The lytic bacteriophages available for phage therapy are best kept refrigerated but should be discarded if the pale yellow, clear liquid goes cloudy.
https://www.wikidoc.org/index.php/Phage_therapy
487c98ca9f46cc424504f6e04bfd16427fa1df55
wikidoc
Phenylalanine
Phenylalanine # Overview Phenylalanine (abbreviated as Phe or F) is an α-amino acid with the formula HO2CCH(NH2)CH2C6H5. This essential amino acid is classified as nonpolar because of the hydrophobic nature of the benzyl side chain. The codons for L-phenylalanine are UUU and UUC. It is a white, powdery solid. L-Phenylalanine (LPA) is an electrically neutral amino acid, one of the twenty common amino acids used to biochemically form proteins, coded for by DNA. # Biosynthesis Phenylalanine cannot be made by animals, which have to obtain it from their diet. It is produced by plants and most microorganisms from prephenate, an intermediate on the shikimate pathway. Prephenate is decarboxylated with loss of the hydroxyl group to give phenylpyruvate. This species is transaminated using glutamate as the nitrogen source to give phenylalanine and α-ketoglutarate. # Other biological roles L-phenylalanine can also be converted into L-tyrosine, another one of the DNA-encoded amino acids. L-tyrosine in turn is converted into L-DOPA, which is further converted into dopamine, norepinephrine (noradrenaline), and epinephrine (adrenaline) (the latter three are known as the catecholamines). Phenylalanine uses the same active transport channel as tryptophan to cross the blood-brain barrier, and, in large quantities, interferes with the production of serotonin. Lignin is derived from phenylalanine and from tyrosine. Phenylalanine is converted to cinnamic acid by the enzyme phenylalanine ammonia lyase. ## Phenylketonuria The genetic disorder phenylketonuria (PKU) is the inability to metabolize phenylalanine. Individuals with this disorder are known as "phenylketonurics" and must abstain from consumption of phenylalanine. This dietary restriction also applies to pregnant women with hyperphenylalaninemia (high levels of phenylalanine in the blood) because they do not properly metabolize the amino acid phenylalanine. Persons suffering from PKU must monitor their intake of protein to control the buildup of phenylalanine as their bodies convert protein into its component amino acids. A related issue is the compound present in many sugarless gums and mints, snack foods, sugarless soft drinks (such as diet sodas including Coca-Cola Zero, Pepsi Max, some forms of Lipton Tea, diet Nestea, Clear Splash flavored water), and a number of other low-calorie food products. The artificial sweetener aspartame, sold under the names "Equal" and "NutraSweet", is an ester that is hydrolyzed in the body to give phenylalanine, aspartic acid, and methanol (wood alcohol). The breakdown problems phenylketonurics have with protein and the attendant buildup of phenylalanine in the body also occur with the ingestion of aspartame, although to a lesser degree. Accordingly, all products in the U.S. and Canada that contain aspartame must be labeled: "Phenylketonurics: Contains phenylalanine." In the UK, foods containing aspartame must carry ingredients panels that refer to the presence of 'aspartame or E951', and they must be labeled with a warning "Contains a source of phenylalanine". These warnings are specifically placed to aid individuals who suffer from PKU so that they can avoid such foods. Interestingly, the macaque genome was recently sequenced and it was found that macaques naturally have a mutation that is found in humans who have PKU. # D- and DL-phenylalanine D-phenylalanine (DPA), either as a single enantiomer or as a component of the racemic mixture, is available through conventional organic synthesis. It does not participate in protein biosynthesis, although it is found in proteins in small amounts, particularly in aged proteins and food proteins that have been processed. The biological functions of D-amino acids remain unclear. Some D-amino acids, such as D-phenylalanine, may have pharmacological activity. DL-Phenylalanine is marketed as a nutritional supplement for its putative analgesic and antidepressant activities. The putative analgesic activity of DL-phenylalanine may be explained by the possible blockage by D-phenylalanine of enkephalin degradation by the enzyme carboxypeptidase A. The mechanism of DL-phenylalanine's putative antidepressant activity may be accounted for by the precursor role of L-phenylalanine in the synthesis of the neurotransmitters norepinephrine and dopamine. Elevated brain norepinephrine and dopamine levels are thought to be associated with antidepressant effects. D-phenylalanine is absorbed from the small intestine following ingestion, and transported to the liver via the portal circulation. A fraction of D-phenylalanine appears to be converted to L-phenylalanine. D-phenylalanine is distributed to the various tissues of the body via the systemic circulation. D-phenylalanine appears to cross the blood-brain barrier with less efficiency than L-phenylalanine. A fraction of an ingested dose of D-phenylalanine is excreted in the urine. # History The genetic codon for phenylalanine was the first to be discovered. Marshall W. Nirenberg discovered that when mRNA made up of multiple uracil repeats was inserted into E. coli, the bacterium produced a new protein made up solely of repeated phenylalanine amino acids.
https://www.wikidoc.org/index.php/Phe
ca4ca3fe51143c64dca232b828b033e79738de91
wikidoc
Phencyclidine
Phencyclidine # Overview Phencyclidine (a contraction of the chemical name phenylcyclohexylpiperidine), abbreviated PCP, is a dissociative drug formerly used as an anesthetic agent, exhibiting hallucinogenic and neurotoxic effects. It was first patented in 1952 by the Parke-Davis pharmaceutical company and marketed under the brand name Sernyl. PCP is listed as a Schedule II drug in the United States under the Convention on Psychotropic Substances. In chemical structure, PCP is an arylcyclohexylamine derivative, and, in pharmacology, it is a member of the family of dissociative anesthetics. PCP works primarily as an NMDA receptor antagonist, blocking the activity of the NMDA receptor. Other NMDA receptor antagonists include ketamine, tiletamine, and dextromethorphan. Although the primary psychoactive effects of the drug last only hours, total elimination from the body is prolonged, typically extending over weeks. More than 30 different analogues of PCP were reported as being used on the street during the 1970s and 1980s, mainly in the USA. The best known of these are PCPy (Rolicyclidine, 1-(1-phenylcyclohexyl)pyrrolidine); PCE (Eticyclidine, N-ethyl-1-phenylcyclohexylamine); and TCP (Tenocyclidine, 1-(1-(2-Thienyl)cyclohexyl)piperidine). These compounds were never widely used and did not seem to be as well accepted by users as PCP itself; however, they were all added to Schedule I of the Controlled Substances Act because of their putatively similar effects. The generalised structural motif required for PCP-like activity is derived from structure-activity relationship studies of PCP analogues. All of these analogues would have somewhat similar effects to PCP itself, although with a range of potencies and varying mixtures of anaesthetic, dissociative and stimulant effects depending on the particular substituents used. In some countries such as the USA, Australia, and New Zealand, all of these compounds would be considered controlled substance analogues of PCP, and are hence illegal drugs, even though many of them have never been made or tested. # Danger Like other NMDA receptor antagonists, it is postulated that phencyclidine can cause a certain kind of brain damage called Olney's lesions. Studies conducted on rats showed that high doses of the NMDA receptor antagonist MK-801 caused reversible vacuoles to form in certain regions of the rats' brains, and experts say that it is possible that similar brain damage can occur in humans. All studies on Olney's lesions were performed only on animals and may not apply to humans. Critics have cited poorly performed studies and differences in animal metabolism to suggest that Olney's lesions may not occur in humans. # Medical Use PCP was first tested after World War I as a surgical anesthetic. Because of its adverse side-effects, such as hallucinations, mania, delirium, and disorientation, it was shelved until the 1950s. In 1963, it was patented by Parke-Davis and named Sernyl (referring to serenity), but was withdrawn from the market two years later because of side-effects. It was renamed Sernylan in 1967, and marketed as a veterinary anaesthetic, but again discontinued. Its side-effects and long half-life in the human body made it unsuitable for medical applications. PCP is retained in fatty tissue and is broken down by human metabolism into PCHP, PPC and PCAA. When smoked, some of it is broken down by heat into 1-phenyl-1-cyclohexene (PC) and piperidine.
# Recreational use PCP is consumed in a recreational manner by drug users. Compton (near Los Angeles) remains the primary source of PCP throughout the United States. Los Angeles street gangs continue to control both production and distribution of PCP. It comes in both powder and liquid forms (PCP base dissolved most often in ether), but typically it is sprayed onto leafy material such as marijuana, mint, oregano, parsley, or ginger leaves, then smoked. Common street names for the drug vary from locale to locale, but include "angel dust," "illy," "wet," "fry," "amp," "Nature Boy," and "supergrass" (when combined with marijuana). PCP is a Schedule II substance in the United States and a Class A substance in the United Kingdom. ## Biochemical action The N-methyl-D-Aspartate (NMDA) receptor, a type of ionotropic receptor, is found on the dendrites of neurons and receives signals in the form of neurotransmitters. It is a major excitatory receptor in the brain. Normal physiological function requires that the activated receptor fluxes positive ions through the channel part of the receptor. PCP enters the ion channel from the outside of the neuron and binds, reversibly, to a site in the channel pore, blocking the flux of positive ions into the cell. PCP therefore inhibits depolarization of neurons and interferes with cognitive and other functions of the nervous system. In a similar manner, PCP and analogues also inhibit nicotinic acetylcholine receptor channels (nAChR). Some analogues have greater potency at nAChR than at NMDAR. In some brain regions, these effects act synergistically to inhibit excitatory activity. ## Method of absorption The term "embalming fluid" is often used to refer to the liquid PCP in which a cigarette or joint is dipped (a "sherm" or "dippy"), to be ingested through smoking. Smoking PCP is known as "getting wet." There is much confusion over the practice of dipping cigarettes in "embalming fluid" leading some to think that real embalming fluid may actually be used. This is a misconception that may cause serious health consequences beyond those of consuming PCP. In its powder form, PCP can be insufflated. In Canada, particularly in the provinces of Quebec and New Brunswick, PCP is mostly encountered as "mescaline" (often locally called "mess" or "mesc"), although most local users are aware that the drug is not, in fact, mescaline, but is actually a mixture of quinine or lactose and PCP freebase. The most common form of ingesting PCP is through smoking; however, the drug may also be insufflated. In its pure form, PCP is a white crystalline powder that readily dissolves in water. However, most PCP on the illicit market contains a number of contaminants as a result of makeshift manufacturing, causing the color to range from tan to brown, and the consistency to range from powder to a gummy mass. ## Effects PCP gives a feeling of being disconnected to one's body and environment. PCP has potent effects on the nervous system, altering perceptual functions (hallucinations, delusional ideas, delirium or confused thinking), motor functions (unsteady gait, loss of coordination, and disrupted eye movement or nystagmus), and autonomic nervous system regulation (rapid heart rate, altered temperature regulation). The drug has been known to alter mood states in an unpredictable fashion, causing some individuals to become detached, and others to become animated. Intoxicated individuals may act in an unpredictable fashion, driven by their delusions or hallucinations. 
Included in the portfolio of behavioral disturbances are acts of self-injury including suicide, and attacks on others or destruction of property. The analgesic properties of the drug can cause users to feel less pain, and persist in violent or injurious acts as a result. Recreational doses of the drug can also induce a psychotic state that resembles schizophrenic episodes. # Phencyclidine Intoxication ## DSM-V Diagnostic Criteria for Phencyclidine ## Epidemiology and Demographics ### Prevalence The prevalence of phencyclidine intoxication is 2,500 per 100,000 (2.5%) of the overall population. ## Differential Diagnosis - Other substance intoxication - Anticholinergics - Amphetamine - Cocaine - Hallucinogens - Withdrawal from benzodiazepines - Other stimulants - Other conditions - Central nervous system tumors - Depression - Hyponatremia - Hypoglycemia - Neuroleptic malignant syndrome - Schizophrenia - Seizure disorders - Sepsis - Withdrawal from other drugs - Sedatives - Alcohol - Vascular insults # Phencyclidine Use Disorder ## DSM-V Diagnostic Criteria for Phencyclidine Use Disorder ## Epidemiology and Demographics ### Prevalence The prevalence of phencyclidine use disorder is unknown. ## Risk Factors - Age - Lower educational levels - Geographical - West & - Northeast regions of the United States ## Differential Diagnosis - Other substance use disorders - Cannabis - Cocaine - Schizophrenia and other mental disorders - Antisocial personality disorder - Conduct disorder - Major depressive disorder
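Referring back to the Biochemical action section above: a minimal, illustrative way to express concentration-dependent open-channel block is a one-site occupancy model, in which the fraction of channels occupied by the blocker rises with blocker concentration. The dissociation constant and the concentrations below are assumed placeholder values used only for illustration, not measured properties of PCP.

```python
# Illustrative one-site occupancy model for an open-channel blocker.
# The dissociation constant (Kd) below is an assumed placeholder, not a
# measured value for PCP at the NMDA receptor.

def fraction_blocked(blocker_nM: float, kd_nM: float = 100.0) -> float:
    """Equilibrium fraction of channels occupied by the blocker: [B] / ([B] + Kd)."""
    return blocker_nM / (blocker_nM + kd_nM)

if __name__ == "__main__":
    for conc in (10, 50, 100, 500, 1000):  # hypothetical blocker concentrations in nM
        print(f"{conc:5d} nM -> {fraction_blocked(conc):.0%} of channels blocked")
```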
https://www.wikidoc.org/index.php/Phencyclidine
eb6bad4e420fda313376e1087340aec7fc5fbbbb
wikidoc
Phenmetrazine
Phenmetrazine Phenmetrazine is a stimulant of the central nervous system. It was previously sold under the trade name Preludin as an anorectic. Preludin has since been removed from the market. It was initially replaced by the weaker analogue Phendimetrazine (Bontril), but this is now only rarely prescribed, due to problems with abuse. Other names that have been used for Phenmetrazine include: Defenmetrazin, Fenmetrazin, Oxazimedrine, Phenmetraline. # History It was first patented in Germany in 1952 by Boehringer-Ingelheim. It was the result of a search by Thomae and Wick for an anorectic substance without the side-effects of amphetamine. Phenmetrazine was introduced into clinical use in 1954 in Europe. # Medical use In clinical use, phenmetrazine produces less nervousness, hyperexcitability, euphoria and insomnia than the amphetamines. It also does not tend to increase the pulse. Due to the relative lack of side-effects, one study found it well tolerated in children. In a study comparing the effectiveness of phenmetrazine and dextroamphetamine for weight loss, phenmetrazine was found to be slightly more effective. Even though the manufacturers claimed it had "exceptional safety and strikingly low incidence of side effects", within some years there were reports of psychotic reactions of the amphetamine type. # Pharmacology Phenmetrazine produces its action by causing release of noradrenaline and dopamine in the central nervous system. After an oral dose, about 70% of the drug is excreted from the body within 24 hours. About 19% of that is excreted as the unmetabolised drug and the rest as various metabolites. In trials in rats, it has been found that after subcutaneous administration both optical isomers of phenmetrazine are equally effective in reducing food intake, but in oral administration the levo isomer is more effective. In terms of central stimulation, however, the dextro isomer is about 4 times as effective in both methods of administration. # Abuse It is considered by some to have a greater potential for addiction than the amphetamines, and it has been abused in many countries, for example Sweden. When stimulant abuse first became prevalent in Sweden in the 1950s, phenmetrazine was preferred to amphetamine and methamphetamine by addicts, as it was considered the superior drug. In the autobiographical novel "Rush" by Kim Wozencraft, intravenous phenmetrazine is described as the most euphoric and pro-sexual of the stimulants the author used. Phenmetrazine was classified as a narcotic in Sweden in 1959, and was taken completely off the market in 1965. At first the illegal demand was satisfied by smuggling from Germany and later Spain and Italy. Initially Preludin tablets were smuggled, but soon the smugglers started bringing in raw phenmetrazine powder. Eventually amphetamine became the dominant stimulant of abuse because of its easier availability. The drug was taken by The Beatles early in their career. Paul McCartney was one known user. McCartney's introduction to drugs started in Hamburg, Germany. The Beatles had to play for hours, and they were often given "Prellies" (Preludin) by German customers or by Astrid Kirchherr (whose mother bought them). McCartney would usually take one, but John Lennon would often take four or five.
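As a small worked illustration of the excretion figures quoted in the Pharmacology section above (reading "about 19% of that" as 19% of the amount excreted in the first 24 hours, which is an interpretive assumption), the fraction of an oral dose leaving the body as unchanged drug in that period can be estimated as follows; the dose value is a hypothetical example.

```python
# Illustrative arithmetic only: interprets the figures quoted above as
# 70% of the dose excreted within 24 h, of which 19% is unmetabolised drug.
dose_mg = 25.0                       # hypothetical oral dose
excreted_24h = 0.70 * dose_mg        # total drug-related material excreted in 24 h
unchanged_24h = 0.19 * excreted_24h  # portion excreted as unmetabolised phenmetrazine

print(f"Excreted in 24 h: {excreted_24h:.1f} mg of a {dose_mg:.0f} mg dose")
print(f"Of which unchanged drug: {unchanged_24h:.1f} mg "
      f"({unchanged_24h / dose_mg:.0%} of the dose)")
```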
https://www.wikidoc.org/index.php/Phenmetrazine
2408f9e2b9286e003e5c211e12272d0f2665cefa
wikidoc
Phenoperidine
Phenoperidine # Overview Phenoperidine, marketed as its hydrochloride as Operidine or Lealgin, is an opioid used as a general anesthetic. It is a derivative of isonipecotic acid, like pethidine, and is metabolized in part to norpethidine. It is 20-200 times as potent as pethidine as an analgesic. In humans 1 milligram is equipotent with 10 mg morphine. It has less effect on the circulatory system and is less hypnotic than morphine, but it has about the same emetic effect. The nausea can be prevented by giving droperidol or haloperidol. After an intravenous dose the analgesia sets in after 3-5 minutes. Phenoperidine shares structural similarities with both pethidine and haloperidol (and related butyrophenone antipsychotics, e.g. droperidol). While not commonly used today in clinical practice, it is of historical interest as a precursor in the development of some of the most widely used neuroleptic drugs on the market today.
https://www.wikidoc.org/index.php/Phenoperidine
c0f6f2a55eacb250bcc5915168f7e0c6e6c7547b
wikidoc
Phenylacetone
Phenylacetone Phenylacetone, sometimes abbreviated P2P, is an organic compound. It is a clear oil with a refractive index of 1.5168. This chemical is used in the manufacture of methamphetamine and amphetamine. Due to its illicit use in clandestine chemistry, it was made a controlled substance in the United States in 1979. # Preparation There are many methods in the scientific literature to prepare phenylacetone, and due to its controlled nature there is crossover into popular literature such as works by Uncle Fester and Alexander Shulgin. Not surprisingly, there is also a fair amount of data available on the Internet relating to the preparation of phenylacetone. A conceptually simple, although low-yielding, example of phenylacetone synthesis is the Friedel-Crafts alkylation of benzene with chloroacetone. The reaction is low yielding because the monoalkylation product is activated towards additional substitution at the ortho and para positions. Phenylacetone synthesis via the Friedel-Crafts alkylation of benzene with chloroacetone. Phenylacetone can also be produced from many other chemicals. For example, phenylacetic acid is distilled with lead acetate to yield phenylacetone. In another route, benzaldehyde is reacted with nitroethane to yield phenyl-2-nitropropene, which is then reduced, usually in the presence of acid, to phenylacetone. # See Also - MDP2P - a phenylacetone with a methylenedioxy group, used for making MDMA (Ecstasy).
https://www.wikidoc.org/index.php/Phenylacetone
16657c970acc16e4d0a7f275a4a356faa90d153d
wikidoc
Phenylethanol
Phenylethanol Phenethyl alcohol, or 2-phenylethanol (CAS number 60-12-8; SMILES OCCc1ccccc1), is the organic compound with the formula C6H5CH2CH2OH. This colourless liquid occurs widely in nature, being found in a variety of essential oils, including rose, carnation, hyacinth, Aleppo pine, orange blossom, ylang-ylang, geranium, neroli, and champaca. It is slightly soluble in water (2 mL/100 mL H2O), but miscible with ethanol and ether. Phenethyl alcohol is an alcohol with a pleasant floral odor. It is therefore a common ingredient in flavors and perfumery, particularly when the smell of rose is desired. It is used as an additive in cigarettes. It is also used as a preservative in soaps due to its stability in basic conditions. In biology it is of interest due to its antimicrobial properties.
https://www.wikidoc.org/index.php/Phenylethanol
c687be3b6369be7ea225de32bd8330714f9a6419
wikidoc
Phospholamban
Phospholamban Phospholamban, also known as PLN or PLB, is a micropeptide protein that in humans is encoded by the PLN gene. Phospholamban is a 52-amino acid integral membrane protein that regulates the calcium (Ca2+) pump in cardiac muscle cells. # Function This protein is found as a pentamer and is a major substrate for the cAMP-dependent protein kinase (PKA) in cardiac muscle. In the unphosphorylated state, phospholamban is an inhibitor of the cardiac muscle sarcoplasmic reticulum Ca++-ATPase (SERCA2), which transports calcium from the cytosol into the sarcoplasmic reticulum. When phospholamban is phosphorylated by PKA, this inhibition is relieved, and the disinhibited Ca++-ATPase of the SR takes up Ca++ faster, thereby contributing to the lusitropic response elicited in the heart by beta-agonists. The protein is a key regulator of cardiac diastolic function. Mutations in this gene are a cause of inherited human dilated cardiomyopathy with refractory congestive heart failure. When phospholamban is phosphorylated by PKA, its ability to inhibit SERCA2 is lost. Thus, activators of PKA, such as the beta-adrenergic agonist epinephrine (released by sympathetic stimulation), may enhance the rate of cardiac myocyte relaxation. In addition, since SERCA2 is more active, the next action potential will cause an increased release of calcium, resulting in increased contraction (positive inotropic effect). When phospholamban is not phosphorylated, such as when PKA is inactive, it can interact with and inhibit SERCA. The overall effect of phospholamban is to decrease contractility and the rate of muscle relaxation, thereby decreasing stroke volume and heart rate, respectively. # Clinical significance Gene knockout of phospholamban results in animals with hyperdynamic hearts, with little apparent negative consequence. Mutations in this gene are a cause of inherited human dilated cardiomyopathy with refractory congestive heart failure. # Discovery Phospholamban was discovered by Arnold Martin Katz and coworkers in 1974. # Interactions PLN has been shown to interact with SLN and SERCA1.
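As a rough sketch of the regulation described in the Function section above, the snippet below treats SERCA2 Ca2+ uptake as a Hill-type rate whose apparent Ca2+ affinity is lowered (apparent Km raised) while phospholamban is unphosphorylated and is restored once PKA phosphorylates phospholamban. All parameter values are assumed, purely illustrative numbers rather than measured constants.

```python
# Minimal sketch: relative SERCA2 Ca2+ uptake rate with and without
# phospholamban (PLN) inhibition. Assumption: unphosphorylated PLN raises the
# pump's apparent Km for Ca2+; PKA phosphorylation relieves this.
# All numbers are illustrative placeholders.

def serca_uptake_rate(ca_uM: float, pln_phosphorylated: bool,
                      vmax: float = 1.0, km_uM: float = 0.3,
                      km_shift: float = 2.0, hill: float = 2.0) -> float:
    """Relative Ca2+ uptake rate from a Hill equation."""
    km = km_uM if pln_phosphorylated else km_uM * km_shift
    return vmax * ca_uM**hill / (km**hill + ca_uM**hill)

if __name__ == "__main__":
    for ca in (0.1, 0.3, 1.0):  # cytosolic Ca2+ in micromolar (illustrative)
        basal = serca_uptake_rate(ca, pln_phosphorylated=False)
        stimulated = serca_uptake_rate(ca, pln_phosphorylated=True)
        print(f"Ca2+ = {ca:.1f} uM: rate {basal:.2f} -> {stimulated:.2f} "
              f"after PKA phosphorylation")
```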
https://www.wikidoc.org/index.php/Phospholamban
38497cddc7df6ced41aa05b8489cdf23a5fabab5
wikidoc
Photomedicine
Photomedicine Photomedicine is an interdisciplinary branch of medicine that involves the study and application of light with respect to health and disease. Photomedicine may be related to the practice of various fields of medicine including dermatology, surgery, dentistry, optical diagnostics, cardiology, and oncology. # Examples - PUVA for the treatment of psoriasis - Photodynamic therapy (PDT) for treatment of cancer and macular degeneration - Free electron laser - Laser hair removal - Optical diagnostics, for example Optical Coherence Tomography using infrared light of coronary plaques - Confocal microscopy and fluorescence microscopy of in vivo tissue - Diffuse reflectance spectroscopy for in vivo quantification of pigments (normal and cancerous), and hemoglobin - Perpendicular-polarized flash and fluorescence photography of the skin
https://www.wikidoc.org/index.php/Photomedicine
918f2cab5f8686bb1e6c0032b4dce8be4146f984
wikidoc
Photoreceptor
Photoreceptor # Overview Photoreceptor can refer to: In anatomy/cell biology: - Photoreceptor cell: a photosensitive cell, most commonly referring to a specialized type of neuron found in the retina of vertebrate eyes that is capable of phototransduction; - Ocellus (invertebrate photoreceptor): a photoreceptor organ ("simple eye") of invertebrates often comprised of a few sensory cells and a single lens; - Eyespot apparatus (microbial photoreceptor): the photoreceptor organelle of a unicellular organism that allows for phototaxis. In biochemistry: - Photoreceptor protein: a chromoprotein that responds to being exposed to a certain wavelength of light by initiating a signal transduction cascade; - Photopigment: an unstable pigment that undergoes a physical or chemical change upon absorbing a particular wavelength of light; - Photosynthetic pigment: molecules involved in transducing light into chemical energy. In technology: - Photodetector or photosensor: a device that detects light by capturing photons
https://www.wikidoc.org/index.php/Photoreceptor
fc7bd9e0a1231be940bc5a9879bcf067b2cbfa45
wikidoc
Phototoxicity
Phototoxicity Phototoxicity is a phenomenon known in live-cell imaging, where illuminating a fluorescent molecule (the fluorescently active site is called a fluorophore) causes the selective death of the cells expressing it. # In fluorescence microscopy While not completely understood, it seems to be clear that the main cause of phototoxicity is the formation of oxygen radicals due to non-radiative energy transfer. Typically in fluorescence, photons of a certain wavelength excite electrons of the illuminated fluorophore to higher energy states. When these excited electrons return to a lower energy state, they emit a photon with a lower energy level, thus causing the emission of light of a longer wavelength. This principle of fluorescence is also known as the Stokes shift. Unfortunately for microscopists, in many cases some of the energy is not used for this radiative energy transfer but is transferred to oxygen, causing the formation of oxygen radicals. These radicals are highly toxic to living cells, sometimes killing cells in seconds. Phototoxicity in live cells depends strongly on the kind of fluorescent molecule used. The isolation and characterization of fluorescent proteins such as green fluorescent protein (GFP) has provided biologists with fluorochromes which show a much weaker phototoxic effect compared to most smaller chemically synthesized fluorescent molecules such as FITC or rhodamine. Still, the energy level of excitation light as well as the duration of illumination must be minimized to ensure long-term survival of living cells during fluorescent imaging. # In humans ## Phototoxic substances A phototoxic substance is a chemical compound which becomes toxic only when exposed to light. - Some medicines: Tetracycline antibiotics, Methyl aminolevulinate - Some cold pressed essential oils: bergamot oil - Some plant juices: from parsley, Hogweed # Toxicology Testing 3T3 Neutral Red Phototoxicity Test - An in vitro toxicological assessment test used to determine the cytotoxic and photo(cyto)toxic effect of a test article on murine fibroblasts in the presence or absence of UVA light. "The 3T3 Neutral Red Uptake Phototoxicity Assay (3T3 NRU PT) can be utilized to identify the phototoxic effect of a test substance induced by the combination of test substance and light and is based on the comparison of the cytotoxic effect of a test substance when tested after the exposure and in the absence of exposure to a non-cytotoxic dose of UVA/vis light. Cytotoxicity is expressed as a concentration-dependent reduction of the uptake of the vital dye - Neutral Red. Substances that are phototoxic in vivo after systemic application and distribution to the skin, as well as compounds that could act as phototoxicants after topical application to the skin can be identified by the test. The reliability and relevance of the 3T3 NRU PT have been evaluated and has been shown to be predictive when compared with acute phototoxicity effects in vivo in animals and humans." Taken with permission. This is a relatively new assay that was recently adopted by regulatory agencies such as OECD and FDA as an accepted method for the assessment of phototoxic potential of test substances.
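As a hedged illustration of how the concentration-dependent reduction in Neutral Red uptake described above can be summarised, the sketch below compares a hypothetical IC50 measured without light to one measured with UVA exposure and reports their ratio (often called a photo-irritation factor). The IC50 values and the decision threshold are assumed placeholders, not values taken from this document or from any specific guideline.

```python
# Illustrative comparison of cytotoxic potency with and without UVA light in a
# 3T3 NRU-style phototoxicity experiment. The IC50 values and the cut-off used
# below are assumed placeholders for demonstration only.

def photo_irritation_factor(ic50_dark_ug_ml: float, ic50_uva_ug_ml: float) -> float:
    """Ratio of the IC50 measured without light to the IC50 measured with UVA."""
    return ic50_dark_ug_ml / ic50_uva_ug_ml

if __name__ == "__main__":
    ic50_dark, ic50_uva = 80.0, 5.0   # hypothetical IC50s in micrograms/mL
    pif = photo_irritation_factor(ic50_dark, ic50_uva)
    # A substantially higher potency under UVA (large ratio) suggests phototoxic
    # potential; the threshold of 5 here is an illustrative choice.
    verdict = "phototoxic signal" if pif > 5 else "no clear phototoxic signal"
    print(f"PIF = {pif:.1f} -> {verdict}")
```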
Other testing methods for the assessment of phototoxic potential are also available: in vitro toxicology models that use reconstituted tissues are currently being developed and explored, and photo-sensitization testing models exist that are used to determine how a compound may react after being applied to the skin and irradiated with solar-simulated light. # Related External Links In Vitro Phototoxicity Test; ICCVAM 3T3 Neutral Red Phototoxicity Testing Page; 3T3 NRU Phototoxicity Test
https://www.wikidoc.org/index.php/Phototoxic
c1e5d26ed2aae464914c745b0f9e0dbcbd481cc5
wikidoc
Phrenic nerve
Phrenic nerve # Overview The phrenic nerve arises from the third, fourth, and fifth cervical spinal nerves (C3-C5) in humans. It arises from the fifth, sixth and seventh cervical spinal nerves (C5-7) in most domestic animals. # Function The phrenic nerve is made up mostly of motor nerve fibres for producing contractions of the diaphragm. In addition, it provides sensory innervation for many components of the mediastinum and pleura, as well as the upper abdomen, especially the liver and gall bladder. # Path Both phrenic nerves run from C3, C4 and C5 along the anterior scalene muscle deep to the carotid sheath. - The right phrenic nerve passes over the brachiocephalic artery, posterior to the subclavian vein, crosses the root of the right lung, and then leaves the thorax by passing through the vena cava hiatus opening in the diaphragm at the level of T8. The right phrenic nerve passes over the right atrium. - The left phrenic nerve passes over the left ventricle and pierces the diaphragm separately. Both these nerves supply motor fibres to the diaphragm and sensory fibres to the fibrous pericardium, mediastinal pleura and diaphragmatic peritoneum. The pericardiacophrenic artery and vein(s) travel with the phrenic nerve. # Clinical relevance Pain arising from structures served by the phrenic nerve is often "referred" to other somatic regions served by spinal nerves C3-C5. For example, a subphrenic abscess (beneath the diaphragm) might cause a patient to feel pain in the right shoulder. Irritation of the phrenic nerve (or the tissues supplied by it) leads to the hiccup reflex. A hiccup is a spasmodic contraction of the diaphragm, which pulls air against the closed folds of the larynx. The phrenic nerve must be identified during thoracic surgery and preserved. It passes anterior to the hilum of the corresponding lung, and therefore can be identified easily. Severing the phrenic nerve will paralyse that half of the diaphragm. Breathing will be made more difficult but will continue provided the other nerve is intact. # Additional images - Transverse section of thorax, showing relations of pulmonary artery. - The arch of the aorta, and its branches. - Superficial dissection of the right side of the neck, showing the carotid and subclavian arteries. - The right brachial plexus with its short branches, viewed from in front.
https://www.wikidoc.org/index.php/Phrenic
356f299b8dae819868b5e9038886cc425c6df165
wikidoc
Phytochemical
Phytochemical # Overview Phytochemicals are plant or fruit-derived chemical compounds. "Phytonutrients" refers to phytochemicals or compounds that come from edible plants. # Phytochemicals as therapeutics There is abundant evidence from epidemiological studies that the phytochemicals in fruits and vegetables can significantly reduce the risk of cancer, probably due to polyphenol antioxidant and anti-inflammatory effects. Phytochemicals have been used as drugs for millennia. For example, Hippocrates in 400 BC used to prescribe willow tree leaves to abate fever. Salicin, with potent anti-inflammatory and pain-relieving properties, was originally extracted from the White Willow Tree and later synthetically produced to become the staple over-the-counter drug called Aspirin. The number one drug for cancer worldwide, Taxol (paclitaxel), is a phytochemical initially extracted and purified from the Pacific Yew Tree. Among edible plants with health-promoting phytochemicals, Diindolylmethane, from Brassica vegetables (broccoli, cauliflower, cabbage, kale, Brussels sprouts), is currently used as a treatment for Recurring Respiratory Papillomatosis tumors (caused by the Human Papilloma Virus); it is in Phase III clinical trials for Cervical Dysplasia (a precancerous condition caused by the Human Papilloma Virus) and is in clinical trials sponsored by the National Cancer Institute of the United States for a variety of cancers (breast, prostate, lung, colon, and cervical). The compound has potent anti-viral, anti-bacterial and anti-cancer properties through a variety of pathways, and it has also been shown to synergize with Taxol in its anti-cancer properties, making it potentially a very important anti-cancer phytonutrient, as taxol resistance is a major problem for cancer patients worldwide. Some of the compounds in plants with potent medicinal properties may not necessarily be chemicals but may be elements, such as selenium, found abundantly in Brassica vegetables and possessing potent anti-viral and anti-cancer properties. In human clinical trials, selenium supplementation has been shown to reduce the HIV viral load, and it is currently being recommended worldwide by physicians as an adjuvant nutritional supplement to AIDS treatments. It has also been shown to reduce mortality among prostate cancer patients. There are currently many other phytochemicals with potent medicinal properties that are in clinical trials for a variety of diseases. Lycopene from tomatoes, for example, is in clinical trials for cardiovascular diseases and prostate cancer. Human clinical trials have demonstrated that lycopene helps to improve blood flow through the heart, and clinical studies suggest anti-cancer activity against prostate cancer. Lutein and zeaxanthin from spinach have been shown through clinical trials to directly improve human visual performance and help prevent the onset of macular degeneration and cataracts.
In a landmark nutritional sciences study, scientists demonstrated that a diet rich in tomatoes and broccoli was more effective in inhibiting prostate cancer growth than a leading drug for prostate cancer. Nevertheless, following extensive evaluation of scientific and clinical evidence, the United States Food and Drug Administration has denied applications for health claims about the benefits of tomato consumption against prostate cancer, allowing only a limited statement on food product labels. It reads: "Very limited and preliminary scientific research suggests that eating one-half to one cup of tomatoes and/or tomato sauce a week may reduce the risk of prostate cancer. FDA concludes that there is little scientific evidence supporting this claim." Clinical investigations are ongoing worldwide on thousands of phytochemicals with medicinal properties. # Food processing and phytochemicals Phytochemicals in freshly harvested plant foods may be destroyed or removed by modern processing techniques, possibly including cooking. For this reason, industrially processed foods likely contain fewer phytochemicals and may thus be less beneficial than unprocessed foods. Absence or deficiency of phytochemicals in processed foods is believed to have contributed to the increased prevalence of the above-cited preventable or treatable causes of death in contemporary society. Interestingly though, lycopene, a phytochemical present in tomatoes, is concentrated in processed foods such as spaghetti sauce and ketchup, making those foods better sources of lycopene than fresh tomatoes. # List of foods high in phytonutrients Foods high in phytonutrients, or superfoods, are: ## The top 10 phytonutrient-rich foods - soy – protease inhibitors, beta sitosterol, saponins, phytic acid, isoflavones - tomato – lycopene, beta carotene, vitamin C - broccoli – vitamin C, 3,3'-Diindolylmethane, sulphoraphane, lignans, selenium - garlic – thiosulphonates, limonene, quercetin - flax seeds – lignans - citrus fruits – monoterpenes, coumarin, cryptoxanthin, vitamin C, ferulic acid, oxalic acid - blueberries – tannic acid, lignans, anthocyanins - sweet potatoes – beta carotene - chili peppers – capsaicin - legumes: beans, peas, lentils – omega fatty acids, saponins, catechins, quercetin, lutein, lignans ## Other foods rich in phytonutrients or superfoods Some animal-derived foods are also considered superfoods. Beginning in 2005, there has been a rapidly growing recognition of several common and exotic fruits recognized for their nutrient richness and antioxidant qualities, with over 900 new product introductions worldwide. More than a dozen industry publications on functional foods and beverages have referred to various exotic or antioxidant species as superfruits (see References), some of which are shown in the list below. - Apples – quercetin, catechins, tartaric acid - Açaí berries – dietary fiber, anthocyanins, omega-3, omega-6, Beta-sitosterol.
Açaí is the highest-scoring plant food (spices excepted) for antioxidant ORAC value - Dried apricots - Artichoke – silymarin, caffeic acid, ferulic acid - Brassicates: kale, cabbage, brussels sprouts, cauliflower – lutein - Carrots – beta-carotene - Cocoa – flavonoids, epicatechin - Cranberries – ellagic acid, anthocyanins - Eggplant - Gac – beta-carotene, lycopene - Goji (Wolfberry) – ellagic acid, β-carotene, β-cryptoxanthin, zeaxanthin, lutein, lycopene - Pink grapefruit – lycopene - Red grapes and wine – quercetin, resveratrol, catechins, ellagic acid - Green tea – quercetin, catechins, oxalic acid - Mangos – cryptoxanthin - Mangosteen – xanthones - Nuts and seeds – resveratrol, phytic acid, phytosterols, protease inhibitors - Porridge oats – soluble fibre, magnesium, zinc - Okra – beta carotene, lutein, zeaxanthin - Olive oil – monounsaturated fat - Onions – quercetin, thiosulphonates - Papaya – cryptoxanthin - Bell peppers – beta-carotene, vitamin C - Pomegranate – vitamin C, tannins, especially punicalagins - Pumpkin – lignans, carotenes - Quinoa – dietary fiber, gluten-free protein with balanced essential amino acids - Sesame – lignans - Shiitake mushrooms - Spinach – oxalic acid, lutein, zeaxanthin - Squash - Watermelon – lycopene, zeaxanthin, sulphoraphane, indole-3-carbinol - Low-fat yoghurt – calcium - Spirulina – beta-carotene
https://www.wikidoc.org/index.php/Phytochemical
42b77ca71562f99eb80cf209b7856a14c6fdc163
wikidoc
Pierre Potain
Pierre Potain Pierre Charles Édouard Potain (July 19, 1825 – January 5, 1901) was a French cardiologist who was born in Paris. For much of his career he was associated with Necker Hospital in Paris. He was an assistant to Jean-Baptiste Bouillaud (1796-1881), and regarded Bouillaud as a major influence in his study of cardiology. Potain made several contributions involving cardiovascular disease and the testing of cardiac-related matters. Some of the tests included analysis of jugular venous waves, heart gallop rhythm research, blood pressure testing and auscultatory analysis. In 1889 he was credited with making modifications to the sphygmomanometer, a device used to measure blood pressure that had been recently invented by Samuel Siegfried Carl von Basch (1837-1905). The term Potain's sign refers to an extension of percussion dullness over the aortic arch, from the manubrium to the third costal cartilage on the right-hand side of the body. Potain's name is also associated with several other eponymous medical terms; the following terms are rarely used today and are noted for historical purposes only. - Potain's disease: pulmonary edema - Potain's solution: diluent used in a procedure to count red blood cells - Potain's syndrome: dyspepsia with expansion of the right ventricle, and an increase of pulmonary auscultation. # Written works - Des lésions des ganglions lymphatiques viscéraux. Paris, Remquet, 1860. - De la Succession des mouvements du coeur, réfutation des opinions de M. Beau, leçon faite à l'Hôtel-Dieu. Paris: impr. de H. Plon, 1863 - Note sur les dédoublements normaux des bruits du coeur, présentée à la Société médicale des hôpitaux, dans la séance du 22 juin 1866, par le Dr Potain. Paris: impr. de F. Malteste, 1866. - Des mouvements et des bruits qui se passent dans les veines jugulaires. Bull. Soc. Méd. Hôp. Paris (Mémoires), 1867, 2 sér., 4, 3-27. - Du Rhythme cardiaque appelé bruit de galop, de son mécanisme et de sa valeur séméiologique, note présentée à la Société médicale des hôpitaux de Paris. Paris: A. Delahaye, 1876. Also in: Bull. Soc. Méd. Hôp. Paris (Mémoires), (1875), 1876, 12, 137-66. - Des Fluxions pleuro-pulmonaires réflexes d'origine utéro-ovarienne. Paris: impr. de Chaix, 1884. - Du sphygmomanomètre et de la mesure de la pression artérielle chez l'homme à l'état normale et pathologique. Arch. Physiol. Norm. Path., 5 sér., 1, 556-69. - Dernière leçon de M. le professeur Potain. Paris: impr. de J. Gainche, 1900. - La Pression artérielle de l'homme à l'état normal et pathologique. Paris: Masson, 1902. # Reference - Google-translated article from Historiadelamedicina.org
https://www.wikidoc.org/index.php/Pierre_Potain
a60c0a35578367dd6eab63a6400c8127a76024bf
wikidoc
Pili annulati
Pili annulati # Overview Pili annulati (also known as "Ringed hair") is a peculiar disease in which the hair appears banded by alternating segments of light and dark color when seen in reflected light.
https://www.wikidoc.org/index.php/Pili_annulati
9b7b3026f05bb2f9a5bf708a56d0c893f001039e
wikidoc
Pilomatricoma
Pilomatricoma # Overview Pilomatricoma, also known as a calcifying epithelioma of Malherbe, Malherbe calcifying epithelioma, and Pilomatrixoma, is a benign skin tumor derived from the hair matrix. # Histologic features Pilomatricomas consist of anucleate squamous cells (called "ghost cells"), benign viable squamous cells and multi-nucleated giant cells. The presence of calcifications is common. # Pathogenesis Pilomatricoma is associated with high levels of beta-catenin caused by either a mutation in the APC gene or the beta-catenin gene. These high levels of beta-catenin can aid cell proliferation, inhibit cell death, and ultimately lead to cancer. # Diagnosis ## Physical Examination ### Skin - Pilomatricoma. With permission from Dermatology Atlas. - Pilomatricoma. With permission from Dermatology Atlas. - Pilomatricoma. With permission from Dermatology Atlas. - Perforating pilomatricoma. With permission from Dermatology Atlas. - Perforating pilomatricoma. With permission from Dermatology Atlas. - Perforating pilomatricoma. With permission from Dermatology Atlas. - Perforating pilomatricoma. With permission from Dermatology Atlas.
https://www.wikidoc.org/index.php/Pilomatricoma
4e06b680feeb2286dc6fd6059099291d6934894a
wikidoc
Pisiform bone
Pisiform bone # Overview The pisiform bone (also called pisiform or lentiform bone) is a small knobbly, pea-shaped wrist bone. The pisiform bone is found in the proximal row of the carpus. It is located where the ulna (inner bone of the forearm) joins the carpus (wrist). It articulates only with the triquetral. It is a sesamoid bone. The pisiform bone may be known by its small size, and by its presenting a single articular facet. It is situated on a plane anterior to the other carpal bones and is spheroidal in form. The etymology derives from the Latin pīsum which means "pea." # Surfaces Its dorsal surface presents a smooth, oval facet for articulation with the triquetral; this facet approaches the superior, but not the inferior, border of the bone. The volar surface is rounded and rough, and gives attachment to the transverse carpal ligament, and to the Flexor carpi ulnaris and Abductor digiti quinti. The lateral and medial surfaces are also rough, the former being concave, the latter usually convex.
https://www.wikidoc.org/index.php/Pisiform
5a9d2f745b6fd00319f7ff48d7aef73c81f13438
wikidoc
Pivmecillinam
Pivmecillinam # Overview Pivmecillinam (INN) or amdinocillin pivoxil (USAN, trade names Selexid, Penomax and Coactabs) is an orally active prodrug of mecillinam, an extended-spectrum penicillin antibiotic. Pivmecillinam is the pivaloyloxymethyl ester of mecillinam. Neither drug is available in the United States. Pivmecillinam is only considered to be active against Gram-negative bacteria, and is used primarily in the treatment of lower urinary tract infections. In the Nordic countries, it has been widely used in that indication since the 1970s. It has been proposed as the first-line drug of choice for empirical treatment of acute cystitis. It has also been used to treat paratyphoid fever. # Adverse effects The adverse effect profile of pivmecillinam is similar to that of other penicillins. The most common side effects of mecillinam use are rash and gastrointestinal upset, including nausea and vomiting. Prodrugs that release pivalic acid when broken down by the body — such as pivmecillinam, pivampicillin and cefditoren pivoxil — have long been known to deplete levels of carnitine. This is not due to the drug itself, but to pivalate, which is mostly removed from the body by forming a conjugate with carnitine. Although short-term use of these drugs can cause a marked decrease in blood levels of carnitine, it is unlikely to be of clinical significance; long-term use, however, appears problematic and is not recommended.
https://www.wikidoc.org/index.php/Pivmecillinam
07ed4456714638aa6ff06f1bc67ae3c9b0c8e84f
wikidoc
Plakophilin-2
Plakophilin-2 Plakophilin-2 is a protein that in humans is encoded by the PKP2 gene. Plakophilin 2 is expressed in skin and cardiac muscle, where it functions to link cadherins to intermediate filaments in the cytoskeleton. In cardiac muscle, plakophilin-2 is found in desmosome structures located within intercalated discs. Mutations in PKP2 have been shown to be causal in arrhythmogenic right ventricular cardiomyopathy. # Structure Two splice variants of the PKP2 gene have been identified. The first encodes a protein with a molecular weight of 97.4 kDa (881 amino acids) and the second a protein of 92.7 kDa (837 amino acids). A processed pseudogene with high similarity to this locus has been mapped to chromosome 12p13. Plakophilin-2 is a member of the armadillo repeat and plakophilin protein family. Plakophilin proteins contain nine central, conserved armadillo repeat domains flanked by N-terminal and C-terminal domains. Alternatively spliced transcripts encoding protein isoforms have been identified. Plakophilin 2 localizes to cell desmosomes and nuclei and binds plakoglobin, desmoplakin, and the desmosomal cadherins via its N-terminal head domain. # Function Plakophilin 2 functions to link cadherins to intermediate filaments in the cytoskeleton. In cardiomyocytes, plakophilin-2 is found at desmosome structures within intercalated discs, which link adjacent sarcolemmal membranes together. The desmosomal protein, desmoplakin, is the core constituent of the plaque which anchors intermediate filaments to the sarcolemma by its C-terminus and indirectly to sarcolemmal cadherins by its N-terminus, facilitated by plakoglobin and plakophilin-2. Plakophilin is necessary for normal localization and content of desmoplakin at desmosomes, which may in part be due to the recruitment of protein kinase C alpha to desmoplakin. Ablation of PKP2 in mice severely disrupts normal heart morphogenesis. Mutant mice are embryonic lethal and exhibit deficits in the formation of adhering junctions in cardiomyocytes, including the dissociation of desmoplakin and the formation of cytoplasmic granular aggregates around embryonic day 10.5-11. Additional malformations included reduced trabeculation, cytoskeletal disarray and cardiac wall rupture. Further studies demonstrated that plakophilin-2, in coordination with E-cadherin, is required to properly localize RhoA early in actin cytoskeletal rearrangement in order to properly couple the assembly of adherens junctions to the translocation of desmosome precursors in newly formed cell-cell junctions. Plakophilin-2 has over time been shown to be more than a component of cell-cell junctions; rather, the plakophilins are emerging as versatile scaffolds for various signaling pathways that more globally modulate diverse cellular activities. Plakophilin-2 has been shown to localize to nuclei, in addition to desmosomal plaques in the cytoplasm. Studies have shown that plakophilin-2 is found in the nucleoplasm, complexed in the RNA polymerase III holoenzyme with the largest subunit of RNA polymerase III, termed RPC155. There are data to support molecular crosstalk between plakophilin-2 and proteins involved in mechanical junctions in cardiomyocytes, including connexin 43, the major component of cardiac gap junctions; the voltage-gated sodium channel Na(V)1.5 and its interacting subunit, ankyrin G; and the K(ATP) channel. Decreased expression of plakophilin-2 via siRNA leads to a decrease in and redistribution of connexin 43 protein, as well as a decrease in coupling of adjacent cardiomyocytes.
Studies also showed that GJA1 and plakophilin-2 are components of the same biomolecular complex. Plakophilin-2 also associates with Na(V)1.5, and knockdown of plakophilin-2 in cardiomyocytes alters sodium current properties as well as the velocity of action potential propagation. It has also been demonstrated that plakophilin-2 associates with an important component of the Na(V)1.5 complex, ankyrin G, and loss of ankyrin G via siRNA downregulation mislocalized plakophilin-2 and connexin 43 in cardiac cells, which was accompanied by decreased electrical coupling of cells and decreased adhesion strength. These studies were further supported by an investigation in a mouse model harboring a PKP2-heterozygous null mutation, which showed decreased Na(V)1.5 amplitude, as well as a shift in gating and kinetics; pharmacological challenge also induced ventricular arrhythmias. These findings further support the notion that desmosomes crosstalk with sodium channels in the heart, and suggest that the risk of arrhythmias in patients with PKP2 mutations may be unveiled with pharmacological challenge. Evidence has also shown that plakophilin-2 binds to the K(ATP) channel subunit, Kir6.2, and that in cardiomyocytes from haploinsufficient PKP2 mice, K(ATP) channel current density was ∼40% smaller and the regional heterogeneity of K(ATP) channels was altered, suggesting that plakophilin-2 interacts with K(ATP) and mediates crosstalk between intercellular junctions and membrane excitability. # Clinical significance Mutations in PKP2 have been associated with, and shown to cause, arrhythmogenic right ventricular cardiomyopathy, in which they are considered common; the disease is characterized by fibrofatty replacement of cardiomyocytes, ventricular tachycardia and sudden cardiac death. It is estimated that 70% of all mutations associated with arrhythmogenic right ventricular cardiomyopathy are within the PKP2 gene. These mutations in general appear to disrupt the assembly and stability of desmosomes. Mechanistic studies have shown that certain PKP2 mutations result in instability of the plakophilin-2 protein due to enhanced calpain-mediated degradation. Specific and sensitive markers of PKP2 and plakoglobin mutation carriers in arrhythmogenic right ventricular cardiomyopathy have been identified and include T-wave inversions, right ventricular wall motion abnormalities, and ventricular extrasystoles. Additionally, immunohistochemical analysis of proteins comprising cardiomyocyte desmosomes has been shown to be a highly sensitive and specific diagnostic indicator. Clinical and genetic characterization of arrhythmogenic right ventricular cardiomyopathy is currently under intense investigation to understand the penetrance associated with PKP2 mutations, as well as other genes encoding desmosomal proteins, in disease progression and outcome. PKP2 mutations have also been found to coexist with sodium channelopathies in patients with Brugada syndrome. Additionally, plakophilin-2 was found in the adherens junctions of the cardiac myxomata tumors analyzed, and was absent in noncardiac myxomata, suggesting that plakophilin-2 may serve as a valuable marker in the clinical diagnosis of cardiac myxomata. # Interactions PKP2 has been shown to interact with: - ankyrin G, - beta catenin, - desmocollin 1, - desmocollin 2, - desmoglein 1, - desmoglein 2, - desmoplakin, - connexin 43, - plakoglobin, - Kir6.2, and - SCN5A.
https://www.wikidoc.org/index.php/Plakophilin-2
54aaab04f2280a93cd8eeeba6a7a906787c18a8e
wikidoc
Plant cuticle
Plant cuticle Plant cuticles are protective waxy coverings produced only by the epidermal cells of leaves, young shoots and all other aerial plant organs without periderm. The cuticle tends to be thicker on the top of the leaf, but is not always thicker in xerophytic plants living in dry climates than in mesophytic plants from wetter climates, despite a persistent myth to that effect. The cuticle is composed of an insoluble cuticular membrane impregnated by and covered with soluble waxes. Cutin, a polyester polymer composed of inter-esterified straight-chain hydroxy acids which are cross-linked by ester and epoxide bonds, is the best-known structural component of the cuticular membrane. The cuticle can also contain a non-saponifiable hydrocarbon polymer known as cutan. The cuticular membrane is impregnated with cuticular waxes and covered with epicuticular waxes, which are mixtures of hydrophobic aliphatic compounds, hydrocarbons with chain lengths typically in the range C16 to C36. The plant cuticle is one of a series of innovations, together with stomata, xylem and phloem and intercellular spaces in stem and later leaf mesophyll tissue, that plants evolved more than 450 million years ago during the transition between life in water and life on land. Together, these features enabled plant shoots exploring aerial environments to conserve water by internalising the gas exchange surfaces, enclosing them in a waterproof membrane and providing a variable-aperture control mechanism, the stomatal guard cells, which regulate the rates of transpiration and CO2 exchange. In addition to its function as a permeability barrier for water and other molecules, the micro- and nanostructure of the cuticle confers specialised surface properties that prevent contamination of plant tissues with external water, dirt and microorganisms. The leaves of many plants, such as the sacred lotus (Nelumbo nucifera), exhibit ultra-hydrophobic and self-cleaning properties that have been described by Barthlott and Neinhuis (1997). The lotus effect has potential uses in biomimetic technical materials. "The waxy sheet of cuticle also functions in defense, forming a physical barrier that resists penetration by virus particles, bacterial cells, and the spores or growing filaments of fungi".
https://www.wikidoc.org/index.php/Plant_cuticle
3c9cd4c08f1576b1896bfa808bb470dd6de4770c
wikidoc
Platensimycin
Platensimycin Platensimycin is a member of a previously unknown class of antibiotics, which acts by blocking enzymes involved in the condensation steps of fatty acid biosynthesis, which bacteria need to biosynthesise cell membranes (β-ketoacyl-(acyl-carrier-protein (ACP)) synthase I/II (FabF/FabB)). Other enzymes in this pathway have similarly proven to be antibiotic targets, for example FabI, the enoyl-ACP (acyl carrier protein) reductase, which is inhibited by isoniazid and related compounds and by the antiseptic agent triclosan. It is an experimental new drug in preclinical trials in an effort to combat MRSA in a mouse model. The newly discovered natural product inhibitor was first isolated from a strain of Streptomyces platensis by the Merck group. Recently, a first total synthesis of racemic platensimycin has been published. Its structure consists of a 3-amino-2,4-dihydroxybenzoic acid polar part linked through an amine bond to a lipophilic pentacyclic ketolide.
https://www.wikidoc.org/index.php/Platensimycin
a4a4e17e5cfec00f47b98509fb43362af41ee22a
wikidoc
Point process
Point process In mathematics, a point process is a random element whose values are "point patterns" on a set S. While in the exact mathematical definition a point pattern is specified as a locally finite counting measure, it is sufficient for more applied purposes to think of a point pattern as a countable subset of S that has no limit points. Point processes are well studied objects in probability theory and a powerful tool in statistics for modeling and analyzing spatial data, which is of interest in such diverse disciplines as forestry, plant ecology, epidemiology, geography, seismology, materials science, astronomy, and others. Point processes on the real line form an important special case that is particularly amenable to study, because the different points are ordered in a natural way, and the whole point process can be described completely by the (random) intervals between the points. These point processes are frequently used as models for random events in time, such as the arrival of customers in a queue (queueing theory), of impulses in a neuron (computational neuroscience), or of particles in a Geiger counter. # General point process theory ## Definition Let S be a locally compact second countable Hausdorff space equipped with its Borel σ-algebra B. Write \mathfrak{N} for the set of locally finite counting measures on S and \mathcal{N} for the smallest σ-algebra on \mathfrak{N} that renders all the point counts for relatively compact sets B in B measurable. A point process on S is a measurable map ξ from a probability space (\Omega, \mathcal F, P) to the measurable space (\mathfrak{N},\mathcal{N}). By this definition, a point process is a special case of a random measure. The most common example for the state space S is the Euclidean space Rn or a subset thereof, where a particularly interesting special case is given by the real half-line [0,∞). However, point processes are not limited to these examples and may among other things also be used if the points are themselves compact subsets of Rn, in which case ξ is usually referred to as a particle process. It has been noted that the term point process is not a very good one if S is not a subset of the real line, as it might suggest that ξ is a stochastic process. However, the term is well established and uncontested even in the general case. ## Representation Every point process ξ can be represented as ξ = \sum_{i=1}^{N} \delta_{X_i}, where \delta denotes the Dirac measure, N is an integer-valued random variable and the X_i are random elements of S. ## Expectation measure The expectation measure Eξ of a point process ξ is a measure on S that assigns to every Borel subset B of S the expected number of points of ξ in B. That is, Eξ(B) = E(ξ(B)) for every Borel set B in S. # Point processes in spatial statistics The analysis of point pattern data in a compact subset S of Rn is a major object of study within spatial statistics. Such data appear in a broad range of disciplines, amongst which are - forestry and plant ecology (positions of trees or plants in general) - epidemiology (home locations of infected patients) - zoology (burrows or nests of animals) - geography (positions of human settlements, towns or cities) - seismology (epicenters of earthquakes) - materials science (positions of defects in industrial materials) - astronomy (locations of stars or galaxies) The need to use point processes to model these kinds of data lies in their inherent spatial structure. Accordingly, a first question of interest is often whether the given data exhibit complete spatial randomness (i.e.
are a realization of a spatial Poisson process) as opposed to exhibiting either spatial aggregation or spatial inhibition. In contrast, many datasets considered in classical multivariate statistics consist of independently generated datapoints that may be governed by one or several covariates (typically non-spatial). # Point processes on the real half-line Historically the first point processes that were studied had the real half-line R+ = [0,∞) as their state space, which in this context is usually interpreted as time. These studies were motivated by the wish to model telecommunication systems, in which the points represented events in time, such as calls to a telephone exchange. Point processes on R+ are typically described by giving the sequence of their (random) inter-event times (T1, T2,...), from which the actual sequence (X1, X2,...) of event times can be obtained as X_k = T_1 + T_2 + \cdots + T_k. If the inter-event times are independent and identically distributed, the point process obtained is called a renewal process. ## Conditional intensity function The conditional intensity function of a point process on the real half-line is a function λ(t|Ht) defined as \lambda(t| H_{t})=\lim_{\Delta t\to 0}\frac{1}{\Delta t}P(\mbox{One event occurs in the time-interval}\,[t,t+\Delta t]\,|\, H_t), where Ht denotes the history of event times preceding time t.
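The renewal-process construction and the conditional intensity of the simplest case can be illustrated with a short simulation. The following sketch is not part of the original article; it assumes Python with NumPy, and the function name and parameter values are illustrative. It draws i.i.d. exponential inter-event times, so the resulting process is a homogeneous Poisson process on [0, horizon] whose conditional intensity is simply the constant rate.

```python
import numpy as np

def simulate_renewal_process(rate=2.0, horizon=10.0, rng=None):
    """Simulate a renewal process on [0, horizon] with i.i.d. Exp(rate)
    inter-event times T_i; the event times are X_k = T_1 + ... + T_k.
    With exponential inter-event times this is a homogeneous Poisson
    process, whose conditional intensity is the constant `rate`."""
    rng = np.random.default_rng() if rng is None else rng
    event_times = []
    t = 0.0
    while True:
        t += rng.exponential(1.0 / rate)  # draw the next inter-event time T_i
        if t > horizon:
            break
        event_times.append(t)
    return np.array(event_times)

# For this process the expectation measure of B = [0, horizon] equals rate * horizon,
# so the observed count should fluctuate around that value across runs.
events = simulate_renewal_process(rate=2.0, horizon=10.0)
print(f"{len(events)} events observed; expected count = {2.0 * 10.0}")
```

Replacing the exponential draw with any other positive inter-event distribution gives a general renewal process, for which the conditional intensity is no longer constant but depends on the time elapsed since the last event.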
https://www.wikidoc.org/index.php/Point_process
793bdb20e15e487b3295d1506bd95284579f0e44
wikidoc
Poland Spring
Poland Spring Poland Spring is a brand of bottled water manufactured by a subsidiary of Nestlé. It was founded in 1845 by Hiram Ricker. Contrary to popular belief, Poland Spring water does not all come from the town of Poland, Maine. Poland Spring water is derived from multiple sources in the state of Maine, including Poland Spring in Poland, Maine, Clear Spring in Hollis, Evergreen Spring in Fryeburg, Maine, Spruce Spring in Pierce Pond Township, Maine, Garden Spring in Poland, Maine and White Cedar Spring in Dallas Plantation, Maine. It is the top-selling spring water brand in America. # Origins The brand has its origins in the late nineteenth century. Jabez Ricker had bought land in 1794, and two days later travelers knocked on the door asking for breakfast. Repeated requests by other travelers led him to open an inn known as the Mansion House in 1797. In 1844, Jabez's grandson, Hiram Ricker, drank a lot of the spring water and became convinced that it had cured him of chronic dyspepsia. The inn had grown into a resort, and his discussions with guests led them to also praise the drinking water. In this period, it was quite fashionable to "take the waters" for almost all illnesses, causing an uptick in business. The Rickers soon began bottling the water. The inn grew into a significant resort in the late nineteenth and early twentieth century, but the Ricker family lost control of the company during the 1930s. A resort is still operated on the site. # Water sales In 1901 Maine's Bureau of Industrial and Labor Statistics listed eighty-one existing mineral springs. Twenty-three were used for commercial bottling, with total sales of $400,000. $200,000 of these sales were by Poland Spring. Today Poland Spring sells the majority of its water in portable 8, 12, and 20 oz bottles and 500 mL, 700 mL, 1 L, and 1.5 L bottles, but also carries larger 5-gallon bottles usable in office or home water dispensers. Smaller 1-gallon and 2.5-gallon bottles are also available for sale in most supermarkets, and for home delivery in the Northeastern United States. Other less popular varieties of Poland Spring include sparkling, lemon, lime, and distilled. All Poland Spring products are sold in plastic bottles, for both safety and economic reasons. They are also the producers of the Aquapod line of products. In the summer of 2005, Poland Spring changed the color of its 1-gallon bottle cap from dark green to clear and removed the safety seal in favor of a stronger twist-off mechanism. The reason for the color change was to reduce the risk of taste complaints while saving money on materials. Poland Spring has since decided to change its bottles to a lighter, less wasteful style. The new style will debut in September of 2007. # Controversy Several towns in Maine have objected to the business practices of Poland Spring and its parent company Nestlé. In some towns, such as Fryeburg, Maine, Poland Spring actually buys the water (110 million gallons of water from Fryeburg a year) from another company, the Fryeburg Water Co., and ships it to the Poland Spring bottling plant in Poland Spring. However, Fryeburg Water Co. also sells water to the town of Fryeburg. The town of Fryeburg began to question the amount of water the company was selling to Poland Spring. In 2004, the town's water stopped temporarily because of a pump failure, but Poland Spring's operations were able to continue.
The group H₂O for ME wants to create a tax on water drawn for commercial purposes; however, Poland Spring said the tax would force the company into bankruptcy. State congressman Jim Wilfong proposed that a 20-cent-per-gallon tax be put to a vote in a referendum, but the measure was defeated. He also believes that the laws should be changed to place limits on the amount of groundwater landowners can pump out of their land.
https://www.wikidoc.org/index.php/Poland_Spring
3d9edf78b167d9df860f4a49ad24a75782827bed
wikidoc
Pollicization
Pollicization # Overview Pollicization is a plastic surgery technique in which a thumb is created from an existing finger. Typically this consists of surgically migrating the index finger to the position of the thumb in patients who either were born without a functional thumb (most common) or have lost their thumb traumatically and are not amenable to other preferred methods of thumb reconstruction, such as toe-to-hand transfers. During pollicization the index finger metacarpal bone is cut, and the finger is rotated approximately 120 to 160 degrees and replaced at the base of the hand at the usual position of the thumb. The arteries and veins are left attached. If nerves and tendons are available from the previous thumb, these are attached to provide sensation and movement to the new thumb ("neopollux"). If the thumb is congenitally absent, other tendons from the migrated index finger may be shortened and rerouted to provide good movement. The presence of an opposable thumb is considered important for manipulation of most objects in the physical world. Children born without thumbs often adapt to the condition very well with few limitations; therefore, the decision to proceed with pollicization lies with the child's parents with the recommendation of their surgeon. Persons who have grown to adulthood with functional thumbs and then lost a thumb find it highly beneficial to have a thumb reconstruction, not only from a functional but also from a mental and emotional standpoint. Another case for pollicization is where someone is born with a hand that has five fingers, but the radialmost finger is an ordinary finger and not a thumb.
https://www.wikidoc.org/index.php/Pollicization
83d9874d2cb6f0ba894bd7efe773f2b836f84b38
wikidoc
Polycarbophil
Polycarbophil # Disclaimer WikiDoc MAKES NO GUARANTEE OF VALIDITY. WikiDoc is not a professional health care provider, nor is it a suitable replacement for a licensed healthcare provider. WikiDoc is intended to be an educational tool, not a tool for any form of healthcare delivery. The educational content on WikiDoc drug pages is based upon the FDA package insert, National Library of Medicine content and practice guidelines / consensus statements. WikiDoc does not promote the administration of any medication or device that is not consistent with its labeling. Please read our full disclaimer here. NOTE: Most over-the-counter (OTC) products are not reviewed and approved by the FDA. However, they may be marketed if they comply with applicable regulations and policies. FDA has not evaluated whether this product complies. # Overview Polycarbophil is a laxative that is FDA approved for the relief of occasional constipation and to help restore and maintain regularity; the product generally produces a bowel movement in 12 to 72 hours. Common adverse reactions include epigastric fullness and flatulence. # Adult Indications and Dosage ## FDA-Labeled Indications and Dosage (Adult) - Relieves occasional constipation and helps restore and maintain regularity. This product generally produces a bowel movement in 12 to 72 hours. Take each dose of this product with at least 8 ounces (a full glass) of water or other fluid. Taking this product without enough liquid may cause choking. See CHOKING warning. - FiberCon works naturally, so continued use for one to three days is normally required to provide full benefit. Dosage may vary according to diet, exercise, previous laxative use or severity of constipation. ## Off-Label Use and Dosage (Adult) ### Guideline-Supported Use There is limited information regarding Off-Label Guideline-Supported Use of Polycarbophil in adult patients. ### Non–Guideline-Supported Use There is limited information regarding Off-Label Non–Guideline-Supported Use of Polycarbophil in adult patients. # Pediatric Indications and Dosage ## FDA-Labeled Indications and Dosage (Pediatric) There is limited information regarding FDA-Labeled Use of Polycarbophil in pediatric patients. ## Off-Label Use and Dosage (Pediatric) ### Guideline-Supported Use There is limited information regarding Off-Label Guideline-Supported Use of Polycarbophil in pediatric patients. ### Non–Guideline-Supported Use There is limited information regarding Off-Label Non–Guideline-Supported Use of Polycarbophil in pediatric patients. # Contraindications There is limited information regarding Polycarbophil Contraindications in the drug label. # Warnings - Choking: - Taking this product without adequate fluid may cause it to swell and block your throat or esophagus and may cause choking. Do not take this product if you have difficulty in swallowing. If you experience chest pain, vomiting, or difficulty in swallowing or breathing after taking this product, seek immediate medical attention. - Ask a doctor before use if you have - Abdominal pain, nausea, or vomiting - A sudden change in bowel habits that persists over a period of 2 weeks - Ask a doctor or pharmacist before use if you are - Taking any other drug. Take this product 2 or more hours before or after other drugs. All laxatives may affect how other drugs work.
- When using this product - Do not use for more than 7 days unless directed by a doctor - Do not take more than 8 caplets in a 24 hour period unless directed by a doctor - Stop use and ask a doctor if - Rectal bleeding occurs or if you fail to have a bowel movement after use of this or any other laxative. These could be signs of a serious condition. - Keep out of reach of children. - In case of overdose, get medical help or contact a Poison Control Center right away. # Adverse Reactions ## Clinical Trials Experience There is limited information regarding Clinical Trial Experience of Polycarbophil in the drug label. ## Postmarketing Experience There is limited information regarding Postmarketing Experience of Polycarbophil in the drug label. # Drug Interactions There is limited information regarding Polycarbophil Drug Interactions in the drug label. # Use in Specific Populations ### Pregnancy Pregnancy Category (FDA): - Pregnancy Category Pregnancy Category (AUS): - Australian Drug Evaluation Committee (ADEC) Pregnancy Category There is no Australian Drug Evaluation Committee (ADEC) guidance on usage of Polycarbophil in women who are pregnant. ### Labor and Delivery There is no FDA guidance on use of Polycarbophil during labor and delivery. ### Nursing Mothers There is no FDA guidance on the use of Polycarbophil with respect to nursing mothers. ### Pediatric Use There is no FDA guidance on the use of Polycarbophil with respect to pediatric patients. ### Geriatric Use There is no FDA guidance on the use of Polycarbophil with respect to geriatric patients. ### Gender There is no FDA guidance on the use of Polycarbophil with respect to specific gender populations. ### Race There is no FDA guidance on the use of Polycarbophil with respect to specific racial populations. ### Renal Impairment There is no FDA guidance on the use of Polycarbophil in patients with renal impairment. ### Hepatic Impairment There is no FDA guidance on the use of Polycarbophil in patients with hepatic impairment. ### Females of Reproductive Potential and Males There is no FDA guidance on the use of Polycarbophil in women of reproductive potential and males. ### Immunocompromised Patients There is no FDA guidance on the use of Polycarbophil in patients who are immunocompromised. # Administration and Monitoring ### Administration - Oral - Intravenous ### Monitoring There is limited information regarding Monitoring of Polycarbophil in the drug label. # IV Compatibility There is limited information regarding IV Compatibility of Polycarbophil in the drug label. # Overdosage There is limited information regarding Polycarbophil overdosage. If you suspect drug poisoning or overdose, please contact the National Poison Help hotline (1-800-222-1222) immediately. # Pharmacology ## Mechanism of Action There is limited information regarding Polycarbophil Mechanism of Action in the drug label. ## Structure There is limited information regarding Polycarbophil Structure in the drug label. ## Pharmacodynamics There is limited information regarding Pharmacodynamics of Polycarbophil in the drug label. ## Pharmacokinetics There is limited information regarding Pharmacokinetics of Polycarbophil in the drug label. ## Nonclinical Toxicology There is limited information regarding Nonclinical Toxicology of Polycarbophil in the drug label. # Clinical Studies There is limited information regarding Clinical Studies of Polycarbophil in the drug label.
# How Supplied There is limited information regarding Polycarbophil How Supplied in the drug label. ## Storage There is limited information regarding Polycarbophil Storage in the drug label. # Images ## Drug Images ## Package and Label Display Panel # Patient Counseling Information There is limited information regarding Patient Counseling Information of Polycarbophil in the drug label. # Precautions with Alcohol - Alcohol-Polycarbophil interaction has not been established. Talk to your doctor about the effects of taking alcohol with this medication. # Brand Names - FIBERCON ® # Look-Alike Drug Names There is limited information regarding Polycarbophil Look-Alike Drug Names in the drug label. # Drug Shortage Status # Price
https://www.wikidoc.org/index.php/Polycarbophil
0b1f60cddfa7fd8a87045891b99f37e8c6b4de25
wikidoc
Polymer blend
Polymer blend A polymer blend, polymer alloy, or polymer mixture is a member of a class of materials analogous to metal alloys, in which two or more polymers are blended together to create a new material with different physical properties. Polymer blends can be broadly divided into three categories: miscible, partially miscible and immiscible blends. The latter is by far the most populous group. # Notes - ↑ Gert R. Strobl (1996). The Physics of Polymers: Concepts for Understanding Their Structures and Behavior. Springer-Verlag. ISBN 3-540-60768-4. Section 3.2 Polymer Mixtures
https://www.wikidoc.org/index.php/Polymer_blend
224ebf82b8b3a32a612bb83a9b98e39b39d9561a
wikidoc
Polyphosphate
Polyphosphate # Overview Polyphosphates are anionic phosphate polymers linked between hydroxyl groups and hydrogen atoms. The polymerization that takes place is known as a condensation reaction. Phosphate chemical bonds are typically high-energy covalent bonds, which means that energy is available upon breaking such bonds in spontaneous or enzyme-catalyzed reactions. Adenosine triphosphate (ATP) is an example of a phosphate trimer, a polymer with three phosphate groups. # Examples of Polyphosphates ## DNA DNA is built on a type of phosphate/sugar copolymer. Essentially, it consists of alternating deoxyribose and phosphate groups linked together to form a chain or backbone. Nucleotide bases attach to the sugar and form hydrogen bonds with bases on a complementary chain. The entire system consists of two long chains which coil up in a helix-like structure. RNA is similar, with two differences: the sugar ribose is used in the phosphate/sugar backbone rather than deoxyribose, and uracil is used instead of thymine as the aromatic base. ## Sodium Tripolyphosphate Sodium tripolyphosphate (Na5P3O10) has been used widely as a constituent of laundry detergents, acting as a water softener in hard water regions and improving detergent performance. In recent years, concern has grown that this results in substantial amounts of phosphates entering the sewage system and thence to watercourses, resulting in eutrophication. This has led to the amounts of polyphosphates in detergents being legally controlled in a number of countries (e.g., Germany, Italy, Austria). ## High-polymeric Inorganic Polyphosphates High-polymeric inorganic polyphosphates were found in living organisms by L. Liberman in 1890. These compounds are linear polymers containing a few to several hundred residues of orthophosphate linked by energy-rich phosphoanhydride bonds. Previously, they were considered either as a “molecular fossil” or merely as a phosphorus and energy source allowing microorganisms to survive under extreme conditions. These compounds are now known to also have regulatory roles and to occur in representatives of all kingdoms of living organisms, participating in metabolic correction and control on both genetic and enzymatic levels. Polyphosphate is directly involved in the switching-over of the genetic program characteristic of the logarithmic growth stage of bacteria to the program of cell survival under stationary conditions, “a life in the slow lane”. They participate in many regulatory mechanisms occurring in bacteria: - They participate in the induction of rpoS, an RNA-polymerase subunit which is responsible for the expression of a large group of genes involved in adjustments to the stationary growth phase and many stressful agents. - They are important for cell motility, biofilm formation and virulence. - Polyphosphates and exopolyphosphatases participate in the regulation of the levels of the stringent response factor, guanosine 5'-diphosphate 3'-diphosphate (ppGpp), a second messenger in bacterial cells.
- Polyphosphates participate in the formation of channels across living cell membranes. These channels, formed by polyphosphate and poly-β-hydroxybutyrate with Ca2+, are involved in transport processes in a variety of organisms. - An important function of polyphosphate in microorganisms—prokaryotes and the lower eukaryotes—is to handle changing environmental conditions by providing phosphate and energy reserves. Polyphosphates are present in animal cells, and there are many data on their participation in regulatory processes during development and cellular proliferation and differentiation—especially in bone tissue and the brain.
https://www.wikidoc.org/index.php/Polyphosphate
6ba9ba588e74b716bdb14e27adcd75255c136ccf
wikidoc
Polysomnogram
Polysomnogram Polysomnogram (PSG) is a multi-channel ("poly") recording ("gram") during sleep ("somno"). A doctor may order a polysomnogram because the patient has a complaint such as daytime fatigue or sleepiness that may be from interrupted sleep. Typically, doctors order a polysomnogram to diagnose or rule out obstructive sleep apnea. Although the PSG can be done during the day or night, the vast majority of sleep studies are done at night, when most people sleep. Shift workers can be accommodated in some labs by having the test at other times. For the standard test the patient comes to a sleep lab in the early evening, and over the next 1-2 hours is introduced to the setting and "wired up" so that multiple channels of data can be recorded when he/she falls asleep. The sleep lab may be in a hospital, a free-standing medical office, or in a hotel. A sleep technician is always in attendance and is responsible for attaching the electrodes to the patient and monitoring the patient during the study. # Uses A polysomnogram usually records: - 2 channels for the electroencephalogram, or EEG. The EEG is crucial for determining a) IF the patient is sleeping or not, and b) what stage of sleep the patient is in (see below for stages). EEG may be recorded from multiple areas over the head, but for most PSGs two areas are sufficient: the back (occipital channel) and top (central channel). - 1 channel to measure air flow - this is done using a thermistor or pressure probe that fits inside the nostrils - 1 channel for chin movements - this is a recording of the chin 'electromyogram' or EMG, of muscle movements about the chin area; see Electromyography - 1 channel for leg movements - this is a recording of the electromyogram or EMG for the legs (usually one channel for both legs, though some labs will separate them into 2 separate channels); see Electromyography - 2 channels for eye movements, or 'electro-oculogram' - eye movements are crucial for determining the stage of sleep known as Rapid Eye Movement or REM sleep. REM sleep is when most of our dreaming takes place. - 1 channel for EKG or electrocardiogram - records heart rate and rhythm - 1 channel for oxygen saturation - this is done with a pulse oximeter that fits over a finger tip or the ear lobe - 1 channel for chest wall movement - using a belt that wraps around the chest - 1 channel for abdominal wall movement - using another belt that wraps around the upper abdomen Thus, the typical polysomnogram has a minimum of 11 channels. (Note that this is different from the actual number of wires attached to the patient. For technical reasons, 2 wire attachments are actually required per individual recording channel in most cases.) The number of recorded channels can be more than 11 in certain situations. Some labs will measure air flow with both a thermistor and a pressure transducer (the latter considered more sensitive), so that the patient has 2 small probes in the nostrils, not one.
Sometimes snoring will be recorded with a sound probe over the neck, though more commonly the sleep technician will just note snoring as "mild", "moderate" or "loud" or give a numerical estimate on "a scale of 1 to 10". Research labs and labs conducting special tests on selected patients (e.g., when nocturnal seizures are suspected) may also record additional data. Wires for each channel of recorded data lead from the patient and converge into a central box, which in turn is connected to a computer system for recording, storing and displaying all the data. During sleep the computer monitor can display multiple channels continuously. In addition, most labs have a small video camera in the room so the technician can observe the patient visually from an adjacent room. Despite all the attached wires and a new environment, most patients are able to sleep during the PSG. In fact, about the same number of patients state they slept 'as well or better' than at home, as state they slept not as well or poorly. During the study, the technician observes sleep activity by looking at the video monitor and the computer screen that displays all the data second by second. In most labs the test is completed and the patient is discharged home by 7 a.m. After the test is completed a 'scorer' (usually not the sleep technician) analyzes the data by reviewing the study in 30 second 'epochs', looking for the following information: - Onset of sleep from time the lights were turned off; this is called 'sleep latency' and normally is less than 20 minutes. (Note that determining 'sleep' and 'awake' is based solely on the EEG. Patients sometimes feel they were awake when the EEG shows they were sleeping.) - Sleep efficiency: the number of minutes of sleep divided by the number of minutes in bed. Normal is approximately 85 to 90% or higher. - Sleep stages; these are based on 3 sources of data coming from 5 channels: EEG (2 channels usually), EOG (2) and chin EMG (1). From this information each 30-second epoch is scored as 'awake' or one of 5 sleep stages: 1, 2, 3, 4 and REM or Rapid Eye Movement sleep. Stages 1-4 are together called non-REM sleep. Non-REM sleep is distinguished from REM sleep, which is altogether different. Within non-REM sleep, stages 3 and 4 are called "slow wave" sleep because of the relatively wide brain waves compared to other stages; another name for stages 3 and 4 is 'deep sleep'. By contrast, stage 1 and 2 are 'light sleep.'. The figures show Stage 4 sleep and REM sleep; each figure is a 30-second epoch from an overnight PSG. (The percentage of each sleep stage varies by age, with decreasing amounts of REM and deep sleep in older people. The majority of sleep at all ages (except infancy) is Stage 2. REM normally occupies about 20-25% of sleep time. Many factors besides age can affect both the amount and percentage of each sleep stage, including drugs (particularly anti-depressants and pain meds), alcohol taken before bed time, and sleep deprivation.) - Any breathing irregularities; mainly apneas and hypopneas. Apnea is a complete or near complete cessation of breathing for at least 10 seconds; hypopnea is a partial cessation of breathing for at least 10 seconds. - 'Arousals' are sudden shifts in brain wave activity. They may be caused by numerous factors, including breathing abnormalities, leg movements, environmental noises, etc. An abnormal number of arousals indicates 'interrupted sleep' and may explain a person's daytime symptoms of fatigue and/or sleepiness. 
- Cardiac rhythm abnormalities - Leg movements - Body position during sleep - Oxygen saturation during sleep Once scored, the test recording and the scoring data are sent to the sleep medicine physician for interpretation. Ideally, interpretation is done in conjunction with the medical history, a complete list of drugs the patient is taking, and any other relevant information that might impact the study such as napping done before the test. Once interpreted, the sleep physician writes a report which is sent to the referring physician, usually with specific recommendations based on the test results. # Example of summary report from overnight 'diagnostic' sleep study (PSG) Mr. J-----, age 41, 5’8” tall, 265 lbs., came to the sleep lab to rule out obstructive sleep apnea. He complains of some snoring and daytime sleepiness. His score on the Epworth Sleepiness Scale is elevated at 15 (out of possible 24 points), affirming excessive daytime sleepiness (normal is <10/24). This single-night diagnostic sleep study shows evidence for obstructive sleep apnea (OSA). For the full night his apnea+hypopnea index was elevated at 18.1 events/hr. (normal <5 events/hr; this is “moderate” OSA). While sleeping supine, his AHI was twice that, at 37.1 events/hr. He also had some oxygen desaturation; for 11% of sleep time his SaO2 was between 80% and 90%. Results of this study indicate Mr. J---- would benefit from CPAP. To this end, I recommend that he return to the lab for a CPAP titration study. (Interpreted by Dr. M.) # 'Split night' sleep studies The above report mentions CPAP as treatment for obstructive sleep apnea. CPAP is continuous positive airway pressure, and is delivered via a tight-fitting mask to the patient's nose or nose & mouth (some masks cover one, some both). CPAP is typically prescribed after the diagnosis of OSA is made from a sleep study (i.e., after a PSG test). To determine the correct amount of pressure, the right mask size, and also to make sure the patient is tolerant of this therapy, a 'CPAP titration study' is recommended. This is the same as a 'PSG', but with the addition of the mask applied, so the technician can increase the airway pressure inside the mask as needed, until all (or most all) of the patient's airway obstructions are eliminated. The above report recommends Mr. J---- return for a CPAP titration study, which means return to the lab for a 2nd all night PSG (this one with the mask applied). Often, however, when a patient manifests OSA in the first 2 or 3 hours of the initial PSG, the technician will interrupt the study and apply the mask right then and there; the patient is literally woken up and fitted for a mask. The rest of the sleep study is then a 'CPAP titration.' When both the diagnostic PSG and a CPAP titration are done the same night, the entire study is called 'Split Night'. The advantages of the split night study are: 1) the patient only has to come to the lab once, so it is less disruptive than coming two different nights; 2) it is 'half as expensive' to whomever is paying for the study. The disadvantages of a split night study are 1) less time to make a diagnosis of OSA (Medicare requires a minimum of 2 hours of diagnosis time before the mask can be applied); and 2) less time to assure an adequate CPAP titration. If the titration is begun with only a few hours of sleep left, the remaining time may not assure a proper CPAP titration, and the patient may still have to return to the lab.
Because of costs, more and more studies for 'sleep apnea' are attempted as split night when there is early evidence for OSA. Note that both types of study - with and without a CPAP mask - are still polysomnograms. When the CPAP mask is worn, however, the flow measurement lead in the patient's nose is removed, and a wire coming directly from the mask then measures air flow. # Example of summary report from a 'split night' sleep study (PSG) Mr. B---, age 38, 6 ft. tall, 348 lbs., came to the Hospital Sleep Lab to diagnose or rule out obstructive sleep apnea. This polysomnogram consisted of overnight recording of left and right EOG, submental EMG, left and right anterior EMG, central and occipital EEG, EKG, airflow measurement, respiratory effort and pulse oximetry. The test was done without supplemental oxygen. His latency to sleep onset was slightly prolonged at 28.5 minutes. Sleep efficiency was normal at 89.3% (413.5 minutes sleep time out of 463 minutes in bed). During the first 71 minutes of sleep Mr. B---- manifested 83 obstructive apneas, 3 central apneas, 1 mixed apnea and 28 hypopneas, for an elevated apnea+hypopnea index (AHI) of 97 events/hr. (= “severe” OSA). His lowest SaO2 during the pre-CPAP period was 72%. CPAP was then applied at 5 cm H2O, and sequentially titrated to a final pressure of 17 cm H2O. At this pressure his AHI was 4 events/hr. and the low SaO2 had increased to 89%. This final titration level occurred while he was in REM sleep. Mask used was a Respironics Classic nasal (medium-size). In summary, this split night study shows severe OSA in the pre-CPAP period, with definite improvement on high levels of CPAP. At 17 cm H2O his AHI was normal at 4 events/hr. and low SaO2 was 89%. Based on this split night study I recommend he start on nasal CPAP 17 cm H2O along with heated humidity. (Interpreted by Dr. M.)
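The headline numbers in the two example reports are simple arithmetic over the scored study: sleep efficiency is sleep time divided by time in bed, and the apnea+hypopnea index (AHI) is the number of apneas plus hypopneas per hour of sleep. Here is a minimal Python sketch (the function names are illustrative, and the inputs are the figures quoted in the split-night report above, not real patient data):

```python
def sleep_efficiency(minutes_asleep, minutes_in_bed):
    """Sleep efficiency: percentage of time in bed spent asleep."""
    return 100.0 * minutes_asleep / minutes_in_bed

def apnea_hypopnea_index(apneas, hypopneas, minutes_asleep):
    """AHI: respiratory events (apneas + hypopneas) per hour of sleep."""
    return (apneas + hypopneas) / (minutes_asleep / 60.0)

# Figures quoted in the split-night report above:
print(round(sleep_efficiency(413.5, 463), 1))               # 89.3 (% for the whole night)
print(round(apnea_hypopnea_index(83 + 3 + 1, 28, 71), 1))   # ~97 events/hr, pre-CPAP portion
```

Run as-is, this reproduces the 89.3% sleep efficiency and the pre-CPAP AHI of roughly 97 events/hr cited in the report.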
Polysomnogram Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1] Please Take Over This Page and Apply to be Editor-In-Chief for this topic: There can be one or more than one Editor-In-Chief. You may also apply to be an Associate Editor-In-Chief of one of the subtopics below. Please mail us [2] to indicate your interest in serving either as an Editor-In-Chief of the entire topic or as an Associate Editor-In-Chief for a subtopic. Please be sure to attach your CV and or biographical sketch. Polysomnogram (PSG) is a multi-channel ("poly") recording ("gram") during sleep ("somno"). A doctor may order a polysomnogram because the patient has a complaint such as daytime fatigue or sleepiness that may be from interrupted sleep. Typically, doctors order a polysomnogram to diagnose or rule out obstructive sleep apnea. Although the PSG can be done during the day or night, the vast majority of sleep studies are done at night, when most people sleep. Shift workers can be accommodated in some labs by having the test at other times. For the standard test the patient comes to a sleep lab in the early evening, and over the next 1-2 hours is introduced to the setting and "wired up" so that multiple channels of data can be recorded when he/she falls asleep. The sleep lab may be in a hospital, a free-standing medical office, or in a hotel. A sleep technician is always in attendance and is responsible for attaching the electrodes to the patient and monitoring the patient during the study. # Uses A polysomnogram usually records: - 2 channels for the electroencephalogram, or EEG. The EEG is crucial for determining a) IF the patient is sleeping or not, and b) what stage of sleep the patient is in (see below for stages). EEG may be recorded from multiple areas over the head, but for most PSGs two areas are sufficient: the back (occipital channel) and top (central channel). - 1 channel to measure air flow - this is done using a thermistor or pressure probe that fits inside the nostrils - 1 channel for chin movements - this is a recording of the chin 'electroymogram' or EMG, of muscle movements about the chin area; see Electromyography - 1 channel for leg movements - this is a recording of the electromyogram or EMG for the legs (usually one channel for both legs, though some labs will separate them into 2 separate channels); see Electromyography - 2 channels for eye movments, or 'electro-oculogram' - eye movements are crucial for determining the stage of sleep known as Rapid Eye Movement or REM sleep. REM sleep is when most of our dreaming takes place. - 1 channel for EKG or electrocardiogram - records heart rate and rhythm - 1 channel for oxygen saturation - this is done with a pulse oximeter that fits over a finger tip or the ear lobe - 1 channel for chest wall movement - using a belt that wraps around the chest - 1 channel for abdominal wall movement - using another belt that wraps around the upper abdomen Thus, the typical polysomnogram has a minimum of 11 channels. (Note that this is different from the actual number of wires attached to the patient. For technical reasons, 2 wire attachments are actually required per individual recording channel in most cases.) The number of recorded channels can be more than 11 in certain situations. Some labs will measure air flow with both a thermistor and a pressure transducer (the latter considered more sensitive), so that the patient has 2 small probes in the nostrils, not one. 
Sometimes snoring will be recorded with a sound probe over the neck, though more commonly the sleep technician will just note snoring as "mild", "moderate" or "loud" or give a numerical estimate on "a scale of 1 to 10". Research labs and labs conducting special tests on selected patients (e.g., when nocturnal seizures are suspected) may also record additional data. Wires for each channel of recorded data lead from the patient and converge into a central box, which in turn is connected to a computer system for recording, storing and displaying all the data. During sleep the computer monitor can display multiple channels continuously. In addition, most labs have a small video camera in the room so the technician can observe the patient visually from an adjacent room. Despite all the attached wires and a new environment, most patients are able to sleep during the PSG. In fact, about the same number of patients state they slept 'as well or better' than at home, as state they slept not as well or poorly. During the study, the technician observes sleep activity by looking at the video monitor and the computer screen that displays all the data second by second. In most labs the test is completed and the patient is discharged home by 7 a.m. After the test is completed a 'scorer' (usually not the sleep technician) analyzes the data by reviewing the study in 30 second 'epochs', looking for the following information: - Onset of sleep from time the lights were turned off; this is called 'sleep latency' and normally is less than 20 minutes. (Note that determining 'sleep' and 'awake' is based solely on the EEG. Patients sometimes feel they were awake when the EEG shows they were sleeping.) - Sleep efficiency: the number of minutes of sleep divided by the number of minutes in bed. Normal is approximately 85 to 90% or higher. - Sleep stages; these are based on 3 sources of data coming from 5 channels: EEG (2 channels usually), EOG (2) and chin EMG (1). From this information each 30-second epoch is scored as 'awake' or one of 5 sleep stages: 1, 2, 3, 4 and REM or Rapid Eye Movement sleep. Stages 1-4 are together called non-REM sleep. Non-REM sleep is distinguished from REM sleep, which is altogether different. Within non-REM sleep, stages 3 and 4 are called "slow wave" sleep because of the relatively wide brain waves compared to other stages; another name for stages 3 and 4 is 'deep sleep'. By contrast, stage 1 and 2 are 'light sleep.'. The figures show Stage 4 sleep and REM sleep; each figure is a 30-second epoch from an overnight PSG. (The percentage of each sleep stage varies by age, with decreasing amounts of REM and deep sleep in older people. The majority of sleep at all ages (except infancy) is Stage 2. REM normally occupies about 20-25% of sleep time. Many factors besides age can affect both the amount and percentage of each sleep stage, including drugs (particularly anti-depressants and pain meds), alcohol taken before bed time, and sleep deprivation.) - Any breathing irregularities; mainly apneas and hypopneas. Apnea is a complete or near complete cessation of breathing for at least 10 seconds; hypopnea is a partial cessation of breathing for at least 10 seconds. - 'Arousals' are sudden shifts in brain wave activity. They may be caused by numerous factors, including breathing abnormalities, leg movements, environmental noises, etc. An abnormal number of arousals indicates 'interrupted sleep' and may explain a person's daytime symptoms of fatigue and/or sleepiness. 
- Cardiac rhythm abnormalities
- Leg movements
- Body position during sleep
- Oxygen saturation during sleep

Once scored, the test recording and the scoring data are sent to the sleep medicine physician for interpretation. Ideally, interpretation is done in conjunction with the medical history, a complete list of drugs the patient is taking, and any other relevant information that might impact the study, such as napping done before the test. Once interpreted, the sleep physician writes a report which is sent to the referring physician, usually with specific recommendations based on the test results.

# Example of summary report from overnight 'diagnostic' sleep study (PSG)

Mr. J-----, age 41, 5'8" tall, 265 lbs., came to the sleep lab to rule out obstructive sleep apnea. He complains of some snoring and daytime sleepiness. His score on the Epworth Sleepiness Scale is elevated at 15 (out of a possible 24 points), affirming excessive daytime sleepiness (normal is <10/24).

This single-night diagnostic sleep study shows evidence for obstructive sleep apnea (OSA). For the full night his apnea+hypopnea index was elevated at 18.1 events/hr. (normal <5 events/hr; this is "moderate" OSA). While sleeping supine, his AHI was twice that, at 37.1 events/hr. He also had some oxygen desaturation; for 11% of sleep time his SaO2 was between 80% and 90%.

Results of this study indicate Mr. J---- would benefit from CPAP. To this end, I recommend that he return to the lab for a CPAP titration study. (Interpreted by Dr. M.)

# 'Split night' sleep studies

The above report mentions CPAP as treatment for obstructive sleep apnea. CPAP is continuous positive airway pressure, and is delivered via a tight-fitting mask to the patient's nose or nose & mouth (some masks cover one, some both). CPAP is typically prescribed after the diagnosis of OSA is made from a sleep study (i.e., after a PSG test). To determine the correct amount of pressure, the right mask size, and also to make sure the patient is tolerant of this therapy, a 'CPAP titration study' is recommended. This is the same as a 'PSG', but with the addition of the mask applied, so the technician can increase the airway pressure inside the mask as needed, until all (or nearly all) of the patient's airway obstructions are eliminated.

The above report recommends Mr. J---- return for a CPAP titration study, which means return to the lab for a 2nd all-night PSG (this one with the mask applied). Often, however, when a patient manifests OSA in the first 2 or 3 hours of the initial PSG, the technician will interrupt the study and apply the mask right then and there; the patient is literally woken up and fitted for a mask. The rest of the sleep study is then a 'CPAP titration.' When both the diagnostic PSG and a CPAP titration are done the same night, the entire study is called 'Split Night'.

The advantages of the split night study are: 1) the patient only has to come to the lab once, so it is less disruptive than coming two different nights; 2) it is 'half as expensive' to whoever is paying for the study. The disadvantages of a split night study are 1) less time to make a diagnosis of OSA (Medicare requires a minimum of 2 hours of diagnosis time before the mask can be applied); and 2) less time to assure an adequate CPAP titration. If the titration is begun with only a few hours of sleep left, the remaining time may not assure a proper CPAP titration, and the patient may still have to return to the lab.
Because of costs, more and more studies for 'sleep apnea' are attempted as split night when there is early evidence for OSA. Note that both types of study - with and without a CPAP mask - are still polysomnograms. When the CPAP mask is worn, however, the flow measurement lead in the patient's nose is removed, and a wire coming directly from the mask then measures air flow.

# Example of summary report from a 'split night' sleep study (PSG)

Mr. B---, age 38, 6 ft. tall, 348 lbs., came to the Hospital Sleep Lab to diagnose or rule out obstructive sleep apnea. This polysomnogram consisted of overnight recording of left and right EOG, submental EMG, left and right anterior EMG, central and occipital EEG, EKG, airflow measurement, respiratory effort and pulse oximetry. The test was done without supplemental oxygen. His latency to sleep onset was slightly prolonged at 28.5 minutes. Sleep efficiency was normal at 89.3% (413.5 minutes sleep time out of 463 minutes in bed).

During the first 71 minutes of sleep Mr. B---- manifested 83 obstructive apneas, 3 central apneas, 1 mixed apnea and 28 hypopneas, for an elevated apnea+hypopnea index (AHI) of 97 events/hr. (= "severe" OSA). His lowest SaO2 during the pre-CPAP period was 72%. CPAP was then applied at 5 cm H2O, and sequentially titrated to a final pressure of 17 cm H2O. At this pressure his AHI was 4 events/hr. and the low SaO2 had increased to 89%. This final titration level occurred while he was in REM sleep. Mask used was a Respironics Classic nasal (medium-size).

In summary, this split night study shows severe OSA in the pre-CPAP period, with definite improvement on high levels of CPAP. At 17 cm H2O his AHI was normal at 4 events/hr. and low SaO2 was 89%. Based on this split night study I recommend he start on nasal CPAP 17 cm H2O along with heated humidity. (Interpreted by Dr. M.)
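The indices quoted in these reports are simple rates and ratios. The snippet below is a minimal sketch of that arithmetic, using the numbers from the split-night report above; the function names are illustrative only and are not part of any scoring software.

```python
# Illustrative arithmetic behind the summary indices quoted above.
# Event counts and times are taken from the example split-night report.

def sleep_efficiency(minutes_asleep: float, minutes_in_bed: float) -> float:
    """Sleep efficiency: total sleep time divided by time in bed, as a percentage."""
    return 100.0 * minutes_asleep / minutes_in_bed

def apnea_hypopnea_index(apneas: int, hypopneas: int, sleep_minutes: float) -> float:
    """AHI: apneas plus hypopneas per hour of sleep."""
    return (apneas + hypopneas) / (sleep_minutes / 60.0)

# Full-night sleep efficiency: 413.5 minutes asleep out of 463 minutes in bed
print(round(sleep_efficiency(413.5, 463.0), 1))  # 89.3 (%)

# Pre-CPAP period: 83 obstructive + 3 central + 1 mixed apneas plus 28 hypopneas in 71 minutes of sleep
print(round(apnea_hypopnea_index(83 + 3 + 1, 28, 71.0)))  # 97 (events/hr)
```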
https://www.wikidoc.org/index.php/Polysomnogram
f27cefcb46255726d7e3b452895f7e15bd733ad1
wikidoc
Polythiophene
Polythiophene Polythiophenes (PTs) result from the polymerization of thiophenes, a sulfur heterocycle, that can become conducting when electrons are added or removed from the conjugated π-orbitals via doping. The study of polythiophenes has intensified over the last three decades. The maturation of the field of conducting polymers was confirmed by the awarding of the 2000 Nobel Prize in Chemistry to Alan Heeger, Alan MacDiarmid, and Hideki Shirakawa “for the discovery and development of conductive polymers." The most notable property of these materials, electrical conductivity, results from the delocalization of electrons along the polymer backbone – hence the term “synthetic metals”. But, conductivity is not the only interesting property resulting from electron delocalization. The optical properties of these materials respond to environmental stimuli, with dramatic color shifts in response to changes in solvent, temperature, applied potential, and binding to other molecules. Both color changes and conductivity changes are induced by the same mechanism—twisting of the polymer backbone, disrupting conjugation—making conjugated polymers attractive as sensors that can provide a range of optical and electronic responses. A number of comprehensive reviews have been published on PTs, the earliest dating from 1981. Schopf and Koßmehl published a comprehensive review of the literature published between 1990 and 1994. Roncali surveyed electrochemical synthesis in 1992, and the electronic properties of substituted PTs in 1997. McCullough’s 1998 review focussed on chemical synthesis of conducting PTs. A general review of conjugated polymers from the 1990s was conducted by Reddinger and Reynolds in 1999. Finally, Swager et al. examined conjugated-polymer-based chemical sensors in 2000. These reviews are an excellent guide to the highlights of the primary PT literature from the last two decades. # Mechanism of conductivity and doping Electrons are delocalized along the conjugated backbones of conducting polymers, usually through overlap of π-orbitals, resulting in an extended π-system with a filled valence band. By removing electrons from the π-system (“p-doping”), or adding electrons into the π-system (“n-doping”), a charged unit called a bipolaron is formed (see Figure 1). Doping is performed at much higher levels (20–40%) in conducting polymers than in semiconductors (<1%). The bipolaron moves as a unit up and down the polymer chain, and is responsible for the macroscopically observed conductivity of the polymer. For some samples of poly(3-dodecylthiophene) doped with iodine, the conductivity can approach 1000 S/cm. (In comparison, the conductivity of copper is approximately 5×105 S/cm.) Generally, the conductivity of PTs is lower than 1000 S/cm, but high conductivity is not necessary for many applications of conducting polymers (see below for examples). Simultaneous oxidation of the conducting polymer and introduction of counterions, p-doping, can be accomplished electrochemically or chemically. During the electrochemical synthesis of a PT, counterions dissolved in the solvent can associate with the polymer as it is deposited onto the electrode in its oxidized form. By doping the polymer as it is synthesized, a thick film can build up on an electrode—the polymer conducts electrons from the substrate to the surface of the film. Alternatively, a neutral conducting polymer film or solution can be doped post-synthesis. Reduction of the conducting polymer, n-doping, is much less common than p-doping. 
An early study of electrochemical n-doping of poly(bithiophene) found that the n-doping levels are less than those of p-doping, the n-doping cycles were less efficient, the number of cycles required to reach maximum doping was higher, and the n-doping process appeared to be kinetically limited, possibly due to counterion diffusion in the polymer. A variety of reagents have been used to dope PTs. Iodine and bromine produce high conductivities but are unstable and slowly evaporate from the material. Organic acids, including trifluoroacetic acid, propionic acid, and sulfonic acids produce PTs with lower conductivities than iodine, but with higher environmental stabilities. Oxidative polymerization with ferric chloride can result in doping by residual catalyst, although matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) studies have shown that poly(3-hexylthiophene)s are also partially halogenated by the residual oxidizing agent. Poly(3-octylthiophene) dissolved in toluene can be doped by solutions of ferric chloride hexahydrate dissolved in acetonitrile, and can be cast into films with conductivities reaching 1 S/cm. Other, less common p-dopants include gold trichloride and trifluoromethanesulfonic acid. # Structure and optical properties ## Conjugation length and chromisms The extended π-systems of conjugated PTs produce some of the most interesting properties of these materials—their optical properties. As an approximation, the conjugated backbone can be considered as a real-world example of the “electron-in-a-box” solution to the Schrödinger equation; however, the development of refined models to accurately predict absorption and fluorescence spectra of well-defined oligo(thiophene) systems is ongoing. Conjugation relies upon overlap of the π-orbitals of the aromatic rings, which, in turn, requires the thiophene rings to be coplanar (see Figure 2, top). The number of coplanar rings determines the conjugation length—the longer the conjugation length, the lower the separation between adjacent energy levels, and the longer the absorption wavelength. Deviation from coplanarity may be permanent, resulting from mislinkages during synthesis or especially bulky side chains; or temporary, resulting from changes in the environment or binding. This twist in the backbone reduces the conjugation length (see Figure 2, bottom), and the separation between energy levels is increased. This results in a shorter absorption wavelength. Determining the maximum effective conjugation length requires the synthesis of regioregular PTs of defined length. The absorption band in the visible region is increasingly red-shifted as the conjugation length increases, and the maximum effective conjugation length is calculated as the saturation point of the red-shift. Early studies by ten Hoeve et al. estimated that the effective conjugation extended over 11 repeat units, while later studies increased this estimate to 20 units. More recently, Otsubo et al. synthesized 48- and 96-mer oligothiophenes, and found that the red-shift, while small (a difference of 0.1 nm between the 72- and the 96-mer), does not saturate, meaning that the effective conjugation length may be even longer than 96 units. A variety of environmental factors can cause the conjugated backbone to twist, reducing the conjugation length and causing an absorption band shift, including solvent, temperature, application of an electric field, and dissolved ions. 
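Before turning to specific examples, the qualitative trend described above can be made concrete with the crude "electron-in-a-box" picture; this is only a back-of-the-envelope sketch, not the refined models mentioned earlier. For an electron of mass m_e confined to a conjugated segment of length L:

```latex
% Particle-in-a-box sketch (illustrative only, not a quantitative model of PTs):
% energy levels, gap between adjacent levels, and corresponding absorption wavelength.
E_n = \frac{n^{2} h^{2}}{8 m_e L^{2}}, \qquad n = 1, 2, 3, \ldots
\qquad
\Delta E = E_{n+1} - E_{n} = \frac{(2n+1)\, h^{2}}{8 m_e L^{2}},
\qquad
\lambda_{\max} = \frac{h c}{\Delta E}
```

Because both L and the quantum number of the highest occupied level grow with the number of coplanar rings, the gap ΔE falls as the conjugation length increases and λ_max moves to longer wavelength, consistent with the red-shift (and its eventual saturation) described above. The examples that follow show the reverse effect: environmental twisting of the backbone shortens the effective conjugation length and shifts the absorption band to shorter wavelength.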
The absorption band of poly (3-thiophene acetic acid) in aqueous solutions of poly(vinyl alcohol) (PVA) shifts from 480 nm at pH 7 to 415 nm at pH 4. This is attributed to formation of a compact coil structure which can form hydrogen bonds with PVA upon partial deprotonation of the acetic acid group. Chiral PTs showed no induced circular dichroism (ICD) in chloroform, but displayed intense, but opposite, ICDs in chloroform–acetonitrile mixtures versus chloroform–acetone mixtures. Also, a PT with a chiral amino acid side chain displayed moderate absorption band shifts and ICDs, depending upon the pH and the concentration of buffer. Shifts in PT absorption bands due to changes in temperature result from a conformational transition from a coplanar, rodlike structure at lower temperatures to a nonplanar, coiled structure at elevated temperatures. For example, poly(3-(octyloxy)-4-methylthiophene) undergoes a color change from red–violet at 25 °C to pale yellow at 150 °C. An isosbestic point (a point where the absorbance curves at all temperatures overlap) indicates coexistence between two phases, which may exist on the same chain or on different chains. Not all thermochromic PTs exhibit an isosbestic point: highly regioregular poly(3-alkylthiophene)s (PATs) show a continuous blue-shift with increasing temperature if the side chains are short enough so that they do not melt and interconvert between crystalline and disordered phases at low temperatures. Finally, PTs can exhibit absorption shifts due to application of electric potentials (electrochromism), or to introduction of alkali ions (ionochromism). These effects will be discussed in the context of applications of PTs below. ## Regioregularity The asymmetry of 3-substituted thiophenes results in three possible couplings when two monomers are linked between the 2- and the 5-positions. These couplings are: - 2,5’, or head–tail (HT), coupling - 2,2’, or head–head (HH), coupling - 5,5’, or tail–tail (TT), coupling These three diads can be combined into four distinct triads, shown in Figure 3. The triads are distinguishable by NMR spectroscopy, and the degree of regioregularity can be estimated by integration. Elsenbaumer et al. first noticed the effect of regioregularity on the properties of PTs. A regiorandom copolymer of 3-methylthiophene and 3-butylthiophene possessed a conductivity of 50 S/cm, while a more regioregular copolymer with a 2:1 ratio of HT to HH couplings had a higher conductivity of 140 S/cm. Films of regioregular poly(3-(4-octylphenyl)thiophene) (POPT) with greater than 94% HT content possessed conductivities of 4 S/cm, compared with 0.4 S/cm for regioirregular POPT. PATs prepared using Rieke zinc formed “crystalline, flexible, and bronze-colored films with a metallic luster." On the other hand, the corresponding regiorandom polymers produced “amorphous and orange-colored films.” Comparison of the thermochromic properties of the Rieke PATs showed that, while the regioregular polymers showed strong thermochromic effects, the absorbance spectra of the regioirregular polymers did not change significantly at elevated temperatures. This was likely due to the formation of only weak and localized conformational defects. Finally, Xu and Holdcroft demonstrated that the fluorescence absorption and emission maxima of poly(3-hexylthiophene)s occur at increasingly lower wavelengths (higher energy) with increasing HH dyad content. 
The difference between absorption and emission maxima, the Stokes shift, also increases with HH dyad content, which they attributed to greater relief from conformational strain in the first excited state.

## Solubility

Unsubstituted PTs are conductive after doping, and have excellent environmental stability compared with some other conducting polymers such as polyacetylene, but are intractable and soluble only in solutions like mixtures of arsenic trifluoride and arsenic pentafluoride. However, in 1987 examples of organic-soluble PTs were reported. Elsenbaumer et al., using a nickel-catalyzed Grignard cross-coupling, synthesized two soluble PTs, poly(3-butylthiophene) and poly(3-methylthiophene-'co'-3’-octylthiophene), which could be cast into films and doped with iodine to reach conductivities of 4 to 6 S/cm. Hotta et al. synthesized poly(3-butylthiophene) and poly(3-hexylthiophene) electrochemically (and later chemically), and characterized the polymers in solution and cast into films. The soluble PATs demonstrated both thermochromism and solvatochromism (see above) in chloroform and 2,5-dimethyltetrahydrofuran.

Also in 1987, Wudl et al. reported the syntheses of water-soluble sodium poly(3-thiophenealkanesulfonate)s. In addition to conferring water solubility, the pendant sulfonate groups act as counterions, producing self-doped conducting polymers. Substituted PTs with tethered carboxylic acids, acetic acids, amino acids, and urethanes are also water-soluble. More recently, poly(3-(perfluorooctyl)thiophene)s soluble in supercritical carbon dioxide were electrochemically and chemically synthesized by Collard et al. Finally, unsubstituted oligothiophenes capped at both ends with thermally labile alkyl esters were cast as films from solution, and then heated to remove the solubilizing end groups. Atomic force microscopy (AFM) images showed a significant increase in long-range order after heating.

# Synthesis

PTs can be synthesized electrochemically, by applying a potential across a solution of the monomer to be polymerized, or chemically, using oxidants or cross-coupling catalysts. Both methods have their advantages and disadvantages.

## Electrochemical synthesis

In an electrochemical polymerization, a potential is applied across a solution containing thiophene and an electrolyte, producing a conductive PT film on the anode. Electrochemical polymerization is convenient, since the polymer does not need to be isolated and purified, but it produces structures with varying degrees of structural irregularities, such as crosslinking. As shown in Figure 4, oxidation of a monomer produces a radical cation, which can then couple with a second radical cation to form a dication dimer, or with another monomer to produce a radical cation dimer.
Two other important but interrelated factors are the structure of the monomer and the applied potential. The potential required to oxidize the monomer depends upon the electron density in the thiophene ring π-system. Electron-donating groups lower the oxidation potential, while electron-withdrawing groups increase the oxidation potential. Thus, 3-methylthiophene polymerizes in acetonitrile and tetrabutylammonium tetrafluoroborate at a potential of about 1.5 V vs. SCE (saturated calomel electrode), while unsubstituted thiophene polymerizes at about 1.7 V vs. SCE. Steric hindrance resulting from branching at the α-carbon of a 3-substituted thiophene inhibits polymerization. This observation leads to the so-called “polythiophene paradox”: the oxidation potential of many thiophene monomers is higher than the oxidation potential of the resulting polymer. In other words, the polymer can be irreversibly oxidized and decompose at a rate comparable to the polymerization of the corresponding monomer. This remains one of the major disadvantages of electrochemical polymerization, and limits its application for many thiophene monomers with complex side groups. ## Chemical synthesis Chemical synthesis offers two advantages compared with electrochemical synthesis of PTs: a greater selection of monomers, and, using the proper catalysts, the ability to synthesize perfectly regioregular substituted PTs. While PTs may have been chemically synthesized by accident more than a century ago, the first planned chemical syntheses using metal-catalyzed polymerization of 2,5-dibromothiophene were reported by two groups independently in 1980. Yamamoto et al. used magnesium in tetrahydrofuran (THF) and nickel(bipyridine) dichloride, analogous to the Kumada coupling of Grignard reagents to aryl halides. Lin and Dudek also used magnesium in THF, but with a series of acetylacetonate catalysts (Pd(acac)2, Ni(acac)2, Co(acac)2, and Fe(acac)3). Later developments produced higher molecular weight PTs than those initial efforts, and can be grouped into two categories based on their structure. Regioregular PTs can be synthesized by catalytic cross-coupling reactions of bromothiophenes, while polymers with varying degrees of regioregularity can be simply synthesized by oxidative polymerization. The first synthesis of perfectly regioregular PATs was described by McCullough et al. in 1992. As shown in Figure 5 (top), selective bromination produces 2-bromo-3-alkylthiophene, which is followed by transmetallation and then Kumada cross-coupling in the presence of a nickel catalyst. This method produces approximately 100% HT–HT couplings, according to NMR spectroscopy analysis of the diads. In the method subsequently described by Rieke et al. in 1993, 2,5-dibromo-3-alkylthiophene is treated with highly reactive “Rieke zinc" to form a mixture of organometallic isomers (Figure 5, bottom). Addition of a catalytic amount of Pd(PPh3)4 produces a regiorandom polymer, but treatment with Ni(dppe)Cl2 yields regioregular PAT in quantitative yield. While the McCullough and Rieke methods produce structurally homogenous PATs, they require low temperatures, the careful exclusion of water and oxygen, and brominated monomers. In contrast, the oxidative polymerization of thiophenes using ferric chloride described by Sugimoto in 1986 can be performed at room temperature under less demanding conditions. This method has proven to be extremely popular; H.C. 
Stark's antistatic coating Baytron P is prepared on a commercial scale using ferric chloride (see below). A number of studies have been conducted in attempts to improve the yield and quality of the product obtained using the oxidative polymerization technique. In addition to ferric chloride, other oxidizing agents, including ferric chloride hydrate, copper perchlorate, and iron perchlorate, have also been used successfully to polymerize 2,2’-bithiophene. Slow addition of ferric chloride to the monomer solution produced poly(3-(4-octylphenyl)thiophene)s with approximately 94% H–T content. Precipitation of ferric chloride in situ (in order to maximize the surface area of the catalyst) produced significantly higher yields and monomer conversions than adding monomer directly to crystalline catalyst. Higher molecular weights were reported when dry air was bubbled through the reaction mixture during polymerization. Exhaustive Soxhlet extraction after polymerization with polar solvents was found to effectively fractionate the polymer and remove residual catalyst before NMR spectroscopy. Using a lower ratio of catalyst to monomer (2:1, rather than 4:1) may increase the regioregularity of poly(3-dodecylthiophene)s. Andreani et al. reported higher yields of soluble poly(dialkylterthiophene)s in carbon tetrachloride rather than chloroform, which they attributed to the stability of the radical species in carbon tetrachloride. Higher-quality catalyst, added at a slower rate and at reduced temperature, was shown to produce high molecular weight PATs with no insoluble polymer residue. Laakso et al. used a factorial design to determine that increasing the ratio of catalyst to monomer increased the yield of poly(3-octylthiophene), and claimed that a longer polymerization time also increased the yield.

The mechanism of the oxidative polymerization using ferric chloride has been controversial. Sugimoto et al. did not speculate on a mechanism in their 1986 report. In 1992, Niemi et al. proposed a radical mechanism, shown in Figure 6 (top). They based their mechanism on two assumptions. First, since they observed polymerization only in solvents where the catalyst was either partially or completely insoluble (chloroform, toluene, carbon tetrachloride, pentane, and hexane, and not diethyl ether, xylene, acetone, or formic acid), they concluded that the active sites of the polymerization must be at the surface of solid ferric chloride. Therefore, they discounted the possibilities of either two radical cations reacting with each other, or two radicals reacting with each other, “because the chloride ions at the surface of the crystal would prevent the radical cations or radicals from assuming positions suitable for dimerization.” Second, using 3-methylthiophene as a prototypical monomer, they performed quantum mechanical calculations to determine the energies and the total atomic charges on the carbon atoms of the four possible polymerization species (neutral 3-methylthiophene, the radical cation, the radical on carbon 2, and the radical on carbon 5). Since the most negative carbon of the neutral 3-methylthiophene is also carbon 2, and the carbon with the highest odd electron population of the radical cation is carbon 2, they concluded that a radical cation mechanism would lead to mostly 2–2, H–H links. They then calculated the total energies of the species with the radicals at the 2 and the 5 carbons, and found that the latter was more stable by 1.5 kJ/mol.
Therefore, the more stable radical could react with the neutral species, forming head-to-tail couplings as shown in Figure 6 (top). Andersson et al. offered an alternative mechanism in the course of their studies of the polymerization of 3-(4-octylphenyl)thiophene with ferric chloride, where they found a high degree of regioregularity when the catalyst was added to the monomer mixture slowly. They concluded that, given the selectivity of the couplings, and the strong oxidizing conditions, the reaction could proceed via a carbocation mechanism (Figure 6, middle). The radical mechanism was directly challenged in a short communication in 1995, when Olinga and François noted that thiophene could be polymerized by ferric chloride in acetonitrile, a solvent in which the catalyst is completely soluble. Their analysis of the kinetics of thiophene polymerization also seemed to contradict the predictions of the radical polymerization mechanism. Barbarella et al. studied the oligomerization of 3-(alkylsulfanyl)thiophenes, and concluded from their quantum mechanical calculations, and considerations of the enhanced stability of the radical cation when delocalized over a planar conjugated oligomer, that a radical cation mechanism analogous to that generally accepted for electrochemical polymerization was more likely (Figure 6, bottom). Given the difficulties of studying a system with a heterogeneous, strongly oxidizing catalyst that produces difficult to characterize rigid-rod polymers, the mechanism of oxidative polymerization is by no means decided. However, the radical cation mechanism shown in Figure 6 is generally accepted as the most likely route for PT synthesis. # Applications A number of applications have been proposed for conducting PTs, including field-effect transistors, electroluminescent devices, solar cells, photochemical resists, nonlinear optic devices, batteries, and diodes. In general, there are two categories of applications for conducting polymers. Static applications rely upon the intrinsic conductivity of the materials, combined with their ease of processing and material properties common to polymeric materials. Dynamic applications utilize changes in the conductive and optical properties, resulting either from application of electric potentials or from environmental stimuli. As an example of a static application, H.C. Stark’s poly(3,4-ethylenedioxythiophene)-poly(styrene sulfonate) (PEDOT-PSS) product Baytron P (Figure 7) has been extensively used as an antistatic coating (as packaging materials for electronic components, for example). AGFA coats 200 m × 10 m of photographic film per year with Baytron because of its antistatic properties. The thin layer of Baytron is virtually transparent and colorless, prevents electrostatic discharges during film rewinding, and reduces dust buildup on the negatives after processing. PEDOT can also be used in dynamic applications where a potential is applied to a polymer film. The electrochromic properties of PEDOT are used to manufacture windows and mirrors which can become opaque or reflective upon the application of an electric potential. Widespread adoption of electrochromic windows could save billions of dollars per year in air conditioning costs. Finally, Phillips has commercialized a mobile phone with an electrically switchable PEDOT mirror (image). The use of PTs as sensors responding to an analyte has also been the subject of intense research. 
In addition to biosensor applications, PTs can also be functionalized with synthetic receptors for detecting metal ions or chiral molecules as well. PTs with pendant and main-chain crown ether functionalities were reported in 1993 by the research groups of Bäuerle and Swager, respectively (Figure 8). Electrochemically polymerized thin films of the Bäuerle pendant crown ether PT were exposed to millimolar concentrations of alkali cations (Li, Na, and K). The current which passed through the film at a fixed potential dropped dramatically in lithium ion solutions, less so for sodium ion solutions, and only slightly for potassium ion solutions. The Swager main chain crown ether PTs were prepared by chemical coupling and characterized by absorbance spectroscopy. Addition of the same alkali cations resulted in absorbance shifts of 46 nm (Li), 91 nm (Na), and 22 nm (K). The size of the shifts corresponds to the ion-binding preferences of the corresponding crown ether, resulting from a twist in the conjugated polymer backbone induced by ion binding. In the course of their studies of the optical properties of chiral PTs, Yashima and Goto found that a PT with a chiral primary amine (Figure 9) was sensitive to chiral amino alcohols, producing mirror-image-split ICD responses in the π–transition region. This was the first example of chiral recognition by PTs using a chiral detection method (CD spectroscopy). This distinguished it from earlier work by Lemaire et al. who used an achiral detection method (cyclic voltammetry) to detect incorporation of chiral dopant anions into an electrochemically polymerized chiral PT. # Active Research Groups - Richard McCullough group, Carnegie Mellon. - Tobin Marks group, Northwestern. - John Reynolds group, University of Florida. - Timothy Swager group, MIT. - Ivan Oleynik group, University of South Florida. - Dhandapani Venkataraman group, University of Massachusetts, Amherst. - Gregory Sotzing's group, University of Connecticut, Storrs. - Jean Frechet's group, Jean Frechet, University of California, Berkeley. - Michael McGehee group, Stanford University. - Ron Noftle group, Wake Forest University # Further reading - Handbook of Conducting Polymers (Eds: T. A. Skotheim, R. L. Elsenbaumer, J. R. Reynolds), Marcel Dekker, New York 1998. ISBN 0-8247-0050-3. - G. Schopf, G. Koßmehl, Polythiophenes: Electrically Conductive Polymers, Springer, Berlin 1997. ISBN 3-540-61483-4; ISBN 0-387-61483-4. - Synthetic Metals (journal). ISSN 0379-6779.
https://www.wikidoc.org/index.php/Polythiophene
2c15140a37f1a159cfd1b0528e1528975551411d
wikidoc
Pores of Kohn
Pores of Kohn The Pores of Kohn are pores between adjacent alveoli, or interalveolar connections. They function as a means of collateral ventilation; that is, if the lung is partially deflated, ventilation can occur to some extent through these pores. The pores also allow the passage of other materials such as fluid and bacteria. The Pores of Kohn take their name from the German physician Hans Kohn (1866–1935), who first described them in 1893.
https://www.wikidoc.org/index.php/Pores_of_Kohn
db197eaa09f30d52d594b48123a0fe1f50aa60c9
wikidoc
Porokeratosis
Porokeratosis # Overview Porokeratosis is a specific disorder of keratinization that is characterized histologically by the presence of a cornoid lamella, a thin column of closely stacked, parakeratotic cells extending through the stratum corneum with a thin or absent granular layer. # Types Porokeratosis may be divided into the following clinical types: - Plaque-type porokeratosis (also known as "Classic porokeratosis" and "Porokeratosis of Mibelli") is characterized by skin lesions that start as small, brownish papules that slowly enlarge to form irregular, annular, hyperkeratotic or verrucous plaques. Sometimes they may show gross overgrowth, and even horn-like structures may develop. Skin malignancy, although rare, is reported in all types of porokeratosis. Squamous cell carcinoma has been reported to develop in Mibelli-type porokeratosis over perianal areas involving the anal mucosa; this was the first report of mucosal malignancy in any form of porokeratosis. - Porokeratosis of Mibelli (clinical photographs reproduced with permission from Dermatology Atlas). - Disseminated superficial porokeratosis is a more generalized process that involves mainly the extremities in a bilateral, symmetric fashion. In about 50% of cases, skin lesions develop only in sun-exposed areas, and this is referred to as disseminated superficial actinic porokeratosis. - Porokeratosis palmaris et plantaris disseminata is characterized by skin lesions that are superficial, small, relatively uniform, and demarcated by a distinct peripheral ridge of no more than 1 mm in height. - Linear porokeratosis is characterized clinically by skin lesions identical to those of classic porokeratosis, including lichenoid papules, annular lesions, hyperkeratotic plaques with central atrophy, and the characteristic peripheral ridge. - Punctate porokeratosis is a skin condition associated with either the classic or linear type of porokeratosis, and is characterized by multiple, minute, and discrete punctate, hyperkeratotic, seed-like skin lesions surrounded by a thin, raised margin on the palms and soles. - Porokeratosis plantaris discreta is a skin condition that occurs in adults, with a 4:1 female preponderance, characterized by sharply marginated, rubbery, wide-based papules. It is also known as "Steinberg's lesion". It was characterized in 1970. # Pathology Porokeratosis has a characteristic histomorphologic feature known as a cornoid lamella.
https://www.wikidoc.org/index.php/Porokeratosis
ff9e792b80272b35aae71bdf3120c6c1fa8dc1fc
wikidoc
Pradofloxacin
Pradofloxacin # Overview Pradofloxacin (trade name Veraflox) is a 3rd-generation, enhanced-spectrum veterinary antibiotic of the fluoroquinolone class. It was developed by Bayer HealthCare AG, Animal Health GmbH, and received approval from the European Commission in April 2011 for prescription-only use in veterinary medicine for the treatment of bacterial infections in dogs and cats. # History Pradofloxacin was first discovered by chemists at Bayer in 1994 and patented in 1998. The name pradofloxacin was issued in December 2000 by the World Health Organization. Following submission for marketing authorisation to the European Medicines Agency (EMA) in 2004, the application was refused in 2006, prompting further studies. Having reviewed the additional studies, the EMA Committee for Medicinal Products for Veterinary Use (CVMP) recommended granting marketing authorisation of pradofloxacin by consensus in February 2011. Marketing authorisation of pradofloxacin was granted by the European Commission in April 2011. # Mechanism of action The primary mode of action of fluoroquinolones involves interaction with enzymes essential for major DNA functions such as replication, transcription and recombination. The primary targets for pradofloxacin are the bacterial DNA gyrase and topoisomerase IV enzymes. Reversible association between pradofloxacin and DNA gyrase or DNA topoisomerase IV in the target bacteria results in inhibition of these enzymes and rapid death of the bacterial cell. The rapidity and extent of bacterial killing are directly proportional to the drug concentration. As a result, pradofloxacin is active against a wide range of Gram-positive and Gram-negative bacteria including anaerobic bacteria. # Indications As with all prescription veterinary medicines, advice on the use of pradofloxacin should always be sought from a suitably qualified veterinarian. ## Dogs Pradofloxacin was formerly indicated for the treatment of: - wound infections and superficial and deep pyoderma caused by susceptible strains of the Staphylococcus intermedius group (including S. pseudintermedius), - acute urinary tract infections caused by susceptible strains of Escherichia coli and the Staphylococcus intermedius group (including S. pseudintermedius) and - as adjunctive treatment to mechanical or surgical periodontal therapy in the treatment of severe infections of the gingiva and periodontal tissues caused by susceptible strains of anaerobic organisms, for example Porphyromonas spp. and Prevotella spp. but has since been shown to cause bone marrow suppression (resulting in severe neutropenia and thrombocytopenia) in dogs and is no longer recommended for use. ## Cats Pradofloxacin is indicated for the treatment of: - acute infections of the upper respiratory tract caused by susceptible strains of Pasteurella multocida, Escherichia coli and the Staphylococcus intermedius group (including S. pseudintermedius). - wound infections and abscesses caused by susceptible strains of Pasteurella multocida and the Staphylococcus intermedius group (including S. pseudintermedius) (for oral suspension only).
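To make the statement above about killing being directly proportional to drug concentration concrete, a minimal, generic pharmacodynamic sketch is shown below. This is a textbook-style first-order kill model, not something taken from the Veraflox product literature, and the kill-rate constant k is a hypothetical placeholder.

```latex
% Generic concentration-dependent kill model (illustrative assumption, not from the label).
% N = viable bacterial count, C = drug concentration, k = hypothetical kill-rate constant.
\frac{dN}{dt} = -\,k\,C\,N
\qquad\Longrightarrow\qquad
N(t) = N_{0}\,e^{-kCt}
```

Under this simple model the log-reduction in bacterial count achieved over a fixed time scales linearly with concentration, which is the usual shorthand meaning of concentration-dependent killing for fluoroquinolones.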
https://www.wikidoc.org/index.php/Pradofloxacin
47f7ab67feccc321c8d42b8fc154dabaab8b7df1
wikidoc
Transthyretin
Transthyretin Transthyretin (TTR or TBPA) is a transport protein in the serum and cerebrospinal fluid that carries the thyroid hormone thyroxine (T4) and retinol-binding protein bound to retinol. Its name reflects this function: it transports thyroxine and retinol. The liver secretes transthyretin into the blood, and the choroid plexus secretes TTR into the cerebrospinal fluid. TTR was originally called prealbumin (or thyroxine-binding prealbumin) because it ran faster than albumin on electrophoresis gels. # Binding Affinities It functions in concert with two other thyroid hormone-binding proteins in the serum, thyroxine-binding globulin (TBG) and albumin. In cerebrospinal fluid TTR is the primary carrier of T4. TTR also acts as a carrier of retinol (vitamin A) through its association with retinol-binding protein (RBP) in the blood and the CSF. Less than 1% of TTR's T4 binding sites are occupied in blood, a feature exploited therapeutically (see below) to prevent TTR's dissociation, misfolding and aggregation, the process that leads to the degeneration of post-mitotic tissue. Numerous other small molecules are known to bind in the thyroxine binding sites, including many natural products (such as resveratrol), drugs (Tafamidis (Vyndaqel), diflunisal, flufenamic acid), and toxicants (PCBs). # Structure TTR is a 55 kDa homotetramer with a dimer-of-dimers quaternary structure that is synthesized in the liver, choroid plexus and retinal pigment epithelium for secretion into the bloodstream, cerebrospinal fluid and the eye, respectively. Each monomer is a 127-residue polypeptide rich in beta-sheet structure. Association of two monomers via their edge beta-strands forms an extended beta sandwich. Further association of two of these dimers in a face-to-face fashion produces the homotetrameric structure and creates the two thyroxine binding sites per tetramer. This dimer-dimer interface, comprising the two T4 binding sites, is the weaker dimer-dimer interface and is the one that comes apart first in the process of tetramer dissociation. # Role in Disease TTR misfolding and aggregation are known to be associated with the amyloid diseases senile systemic amyloidosis (SSA), familial amyloid polyneuropathy (FAP), and familial amyloid cardiomyopathy (FAC). TTR tetramer dissociation is known to be rate-limiting for amyloid fibril formation. However, the monomer also must partially denature in order for TTR to be mis-assembly competent, leading to a variety of aggregate structures, including amyloid fibrils. While wild-type TTR can dissociate, misfold, and aggregate, leading to SSA, point mutations within TTR are known to destabilize the tetramer composed of mutant and wild-type TTR subunits, facilitating more facile dissociation and/or misfolding and amyloidogenesis. A replacement of valine by methionine at position 30 (TTR V30M) is the mutation most commonly associated with FAP. A position 122 replacement of valine by isoleucine (TTR V122I) is carried by 3.9% of the African-American population, and is the most common cause of FAC. SSA is estimated to affect over 25% of the population over age 80. Severity of disease varies greatly by mutation, with some mutations causing disease in the first or second decade of life, and others being more benign. Deposition of TTR amyloid is generally observed extracellularly, although TTR deposits are also clearly observed within the cardiomyocytes of the heart. Treatment of familial TTR amyloid disease has historically relied on liver transplantation as a crude form of gene therapy.
Because TTR is primarily produced in the liver, replacement of a liver containing a mutant TTR gene with a normal gene is able to reduce the mutant TTR levels in the body to < 5% of pretransplant levels. Certain mutations, however, cause CNS amyloidosis, and due to their production by the choroid plexus, the CNS TTR amyloid diseases do not respond to gene therapy mediated by liver transplantation. In 2011, the European Medicines Agency approved Tafamidis or Vyndaqel for the amelioration of FAP. Vyndaqel kinetically stabilizes the TTR tetramer, preventing tetramer dissociation required for TTR amyloidogenesis and degradation of the autonomic nervous system and/or the peripheral nervous system and/or the heart. TTR is also thought to have beneficial side effects, by binding to the infamous beta-amyloid protein, thereby preventing beta-amyloid's natural tendency to accumulate into the plaques associated with the early stages of Alzheimer's Disease. Preventing plaque formation is thought to enable a cell to rid itself of this otherwise toxic protein form and, thus, help prevent and maybe even treat the disease. There is now strong genetic and pharmacologic data (see European Medicines Agency website for the Tafamidis clinical trial results) indicating that the process of amyloid fibril formation leads to the degeneration of post-mitotic tissue causing FAP and likely FAC and SSA. Evidence points to the oligomers generated in the process of amyloidogenicity leading to the observed proteotoxicity. Transthyretin level in cerebrospinal fluid has also been found to be lower in patients with some neurobiological disorders such as schizophrenia. The reduced level of transthyretin in the CSF may indicate a lower thyroxine transport in brains of patients with schizophrenia. Because transthyretin is made in part by the choroid plexus, it can be used as an immunohistochemical marker for choroid plexus papillomas as well as carcinomas. As of March 2015, there are two ongoing clinical trials undergoing recruitment in the United States and worldwide to evaluate potential treatments for TTR Amyloidosis. # Nutritional Assessment In medicine, nutritional status can be assessed by measuring the concentration of transthyretin in the blood. In theory, other transport proteins such as albumin or transferrin could be used, but transthyretin is preferred because of its shorter half-life, although this means that its concentration more closely reflects recent dietary intake rather than overall nutritional status. Transthyretin concentration has been shown to be a good indicator of whether or not a malnourished patient will develop refeeding syndrome upon commencement of refeeding, via either the enteral, parenteral or oral routes. # Interactions Transthyretin has been shown to interact with Perlecan.
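As a quick arithmetic cross-check of the figures quoted in the Structure section above: the tetramer mass can be estimated from the monomer length, assuming an average amino-acid residue mass of about 110 Da (a textbook approximation, not a value given in the source).

```latex
% Approximate tetramer mass: 4 monomers x 127 residues x ~110 Da per residue (assumed average).
4 \times 127 \times 110\ \mathrm{Da} \approx 5.6 \times 10^{4}\ \mathrm{Da} \approx 56\ \mathrm{kDa}
```

This is consistent with the stated ~55 kDa homotetramer built from 127-residue monomers.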
https://www.wikidoc.org/index.php/Prealbumin
add18165dc6cdcbf1588211534c7450b5c1fd06e
wikidoc
Prednicarbate
Prednicarbate # Disclaimer WikiDoc MAKES NO GUARANTEE OF VALIDITY. WikiDoc is not a professional health care provider, nor is it a suitable replacement for a licensed healthcare provider. WikiDoc is intended to be an educational tool, not a tool for any form of healthcare delivery. The educational content on WikiDoc drug pages is based upon the FDA package insert, National Library of Medicine content and practice guidelines / consensus statements. WikiDoc does not promote the administration of any medication or device that is not consistent with its labeling. Please read our full disclaimer here. # Overview Prednicarbate is an antiinflammatory that is FDA approved for the treatment of inflammatory and pruritic manifestations of corticosteroid responsive dermatoses. Common adverse reactions include pruritis, edema, paresthesia, urticaria, burning, allergic contact dermatitis and rash. # Adult Indications and Dosage ## FDA-Labeled Indications and Dosage (Adult) - Prednicarbate emollient cream 0.1% is a medium-potency corticosteroid indicated for the relief of the inflammatory and pruritic manifestations of corticosteroid responsive dermatoses. Prednicarbate emollient cream 0.1% may be used with caution in pediatric patients 1 year of age or older. The safety and efficacy of drug use for longer than 3 weeks in this population have not been established. Since safety and efficacy of prednicarbate emollient cream 0.1% have not been established in pediatric patients below 1 year of age, its use in this age group is not recommended. - Apply a thin film of prednicarbate emollient cream 0.1% to the affected skin areas twice daily. Rub in gently. - Prednicarbate emollient cream 0.1 % may be used in pediatric patients 1 year of age or older. Safety and efficacy of prednicarbate emollient cream 0.1% in pediatric patients for more than 3 weeks of use have not been established. Use in pediatric patients under 1 year of age is not recommended. - As with other corticosteroids, therapy should be discontinued when control is achieved. If no improvement is seen within 2 weeks, reassessment of the diagnosis may be necessary. - Prednicarbate emollient cream 0.1% should not be used with occlusive dressings unless directed by the physician. Prednicarbate emollient cream 0.1% should not be applied in the diaper area if the child still requires diapers or plastic pants as these garments may constitute occlusive dressing. ## Off-Label Use and Dosage (Adult) ### Guideline-Supported Use There is limited information regarding Off-Label Guideline-Supported Use of Prednicarbate in adult patients. ### Non–Guideline-Supported Use There is limited information regarding Off-Label Non–Guideline-Supported Use of Prednicarbate in adult patients. # Pediatric Indications and Dosage ## FDA-Labeled Indications and Dosage (Pediatric) - Prednicarbate emollient cream 0.1% may be used with caution in pediatric patients 1 year of age or older, although the safety and efficacy of drug use longer than 3 weeks have not been established. The use of prednicarbate emollient cream 0.1% is supported by results of a three-week, uncontrolled study in 59 pediatric patients between the ages of 4 months and 12 years of age with atopic dermatitis. None of the 59 pediatric patients showed evidence of HPA-axis suppression. Safety and efficacy of prednicarbate emollient cream 0.1% in pediatric patients below 1 year of age have not been established, therefore use in this age group is not recommended. 
Because of a higher ratio of skin surface area to body mass, pediatric patients are at a greater risk than adults of HPA-axis suppression and Cushing's syndrome when they are treated with topical corticosteroids. - They are therefore also at greater risk of adrenal insufficiency during and/or after withdrawal of treatment. In an uncontrolled study in pediatric patients with atopic dermatitis, the incidence of adverse reactions possibly or probably associated with the use of prednicarbate emollient cream 0.1% was limited. - Mild signs of atrophy developed in 5 patients (5/59, 8%) during the clinical trial, with 2 patients exhibiting more than one sign. Two patients (2/59, 3%) developed shininess, and two patients (2/59, 3%) developed thinness. Three patients (3/59, 5%) were observed with mild telangiectasia. It is unknown whether prior use of topical corticosteroids was a contributing factor in the development of telangiectasia in 2 of the patients. - Adverse effects including striae have also been reported with inappropriate use of topical corticosteroids in infants and children. Pediatric patients applying topical corticosteroids to greater than 20% of body surface are at higher risk for HPA-axis suppression. - HPA axis suppression, Cushing's syndrome, linear growth retardation, delayed weight gain and intracranial hypertension have been reported in children receiving topical corticosteroids. Manifestations of adrenal suppression in children include low plasma cortisol levels, and absence of response to ACTH stimulation. Manifestations of intracranial hypertension include bulging fontanelles, headaches, and bilateral papilledema. - Prednicarbate emollient cream 0.1% should not be used in the treatment of diaper dermatitis. ## Off-Label Use and Dosage (Pediatric) ### Guideline-Supported Use There is limited information regarding Off-Label Guideline-Supported Use of Prednicarbate in pediatric patients. ### Non–Guideline-Supported Use There is limited information regarding Off-Label Non–Guideline-Supported Use of Prednicarbate in pediatric patients. # Contraindications - Prednicarbate emollient cream 0.1% is contraindicated in those patients with a history of hypersensitivity to any of the components in the preparations. # Warnings - Systemic absorption of topical corticosteroids can produce reversible hypothalamic-pituitary-adrenal (HPA) axis suppression with the potential for glucocorticosteroid insufficiency after withdrawal of treatment. - Manifestations of Cushing's syndrome, hyperglycemia, and glucosuria can also be produced in some patients by systemic absorption of topical corticosteroids while on treatment. - Patients applying a topical steroid to a large surface area or under occlusion should be evaluated periodically for evidence of HPA-axis suppression. This may be done by using the ACTH stimulation, A.M. plasma cortisol, and urinary free cortisol tests. - Prednicarbate emollient cream 0.1% did not produce significant HPA-axis suppression when used at a dose of 30 g/day for a week in 10 adult patients with extensive psoriasis or atopic dermatitis. Prednicarbate emollient cream 0.1% did not produce HPA-axis suppression in any of 59 pediatric patients with extensive atopic dermatitis when applied BID for 3 weeks to > 20% of the body surface. - If HPA-axis suppression is noted, an attempt should be made to withdraw the drug, to reduce the frequency of the application, or to substitute a less potent corticosteroid.
Recovery of HPA-axis function is generally prompt upon discontinuation of topical corticosteroids. Infrequently, signs and symptoms of glucocorticosteroid insufficiency may occur, requiring supplemental systemic corticosteroids. For information on systemic supplementation, see prescribing information for those products. - Pediatric patients may be more susceptible to systemic toxicity from equivalent doses due to their larger skin surface to body mass ratios. - If irritation develops, prednicarbate emollient cream 0.1% should be discontinued and appropriate therapy instituted. Allergic contact dermatitis with corticosteroids is usually diagnosed by observing a failure to heal rather than noting a clinical exacerbation, as observed with most topical products not containing corticosteroids. Such an observation should be corroborated with appropriate diagnostic patch testing. - If concomitant skin infections are present or develop, an appropriate antifungal or antibacterial agent should be used. - If a favorable response does not occur promptly, use of prednicarbate emollient cream 0.1% should be discontinued until the infection has been adequately controlled. # Adverse Reactions ## Clinical Trials Experience There is limited information regarding Clinical Trial Experience of Prednicarbate in the drug label. ## Postmarketing Experience - In controlled adult clinical studies, the incidence of adverse reactions probably or possibly associated with the use of prednicarbate emollient cream 0.1% was approximately 4%. Reported reactions included mild signs of skin atrophy in 1% of treated patients, as well as the following reactions which were reported in less than 1% of patients: pruritis, edema, paresthesia, urticaria, burning, allergic contact dermatitis and rash. - In an uncontrolled study in pediatric patients with atopic dermatitis, the incidence of adverse reactions possibly or probably associated with the use of prednicarbate emollient cream 0.1 % was limited. Mild signs of atrophy developed in 5 patients (5/59, 8%) during the clinical trial, with 2 patients exhibiting more than one sign. Two patients (2/59, 3%) developed shininess, and 2 patients (2/59, 3%) developed thinness. Three patients (3/59, 5 %) were observed with mild telangiectasia. It is unknown whether prior use of topical corticosteroids was a contributing factor in the development of telangiectasia in 2 of the patients. - The following additional local adverse reactions have been reported infrequently with topical corticosteroids, but may occur more frequently with the use of occlusive dressings. These reactions are listed in an approximate decreasing order of occurrence: folliculitis, acneiform eruptions, hypopigmentation, perioral dermatitis, secondary infection, striae and miliaria. # Drug Interactions There is limited information regarding Prednicarbate Drug Interactions in the drug label. # Use in Specific Populations ### Pregnancy Pregnancy Category (FDA): - Corticosteroids have been shown to be teratogenic in laboratory animals when administered systemically at relatively low dosage levels. Some corticosteroids have been shown to be teratogenic after dermal application in laboratory animals. - Prednicarbate has been shown to be teratogenic and embryotoxic in Wistar rats and Himalayan rabbits when given subcutaneously during gestation at doses 1900 times and 45 times the recommended topical human dose, assuming a percutaneous absorption of approximately 3%. 
In the rats, slightly retarded fetal development and an incidence of thickened and wavy ribs higher than the spontaneous rate were noted. - In rabbits, increased liver weights and slight increase in the fetal intrauterine death rate were observed. The fetuses that were delivered exhibited reduced placental weight, increased frequency of cleft palate, ossification disorders in the sternum, omphalocele, and anomalous posture of the forelimbs. - There are no adequate and well-controlled studies in pregnant women on teratogenic effects of prednicarbate. Prednicarbate emollient cream 0.1% should be used during pregnancy only if the potential benefit justifies the potential risk to the fetus. Pregnancy Category (AUS): - Australian Drug Evaluation Committee (ADEC) Pregnancy Category There is no Australian Drug Evaluation Committee (ADEC) guidance on usage of Prednicarbate in women who are pregnant. ### Labor and Delivery There is no FDA guidance on use of Prednicarbate during labor and delivery. ### Nursing Mothers - Systemically administered corticosteroids appear in human milk and could suppress growth, interfere with endogenous corticosteroid production, or cause other untoward effects. It is not known whether topical administration of corticosteroids could result in sufficient systemic absorption to produce detectable quantities in human milk. Because many drugs are excreted in human milk, caution should be exercised when prednicarbate emollient cream 0.1% is administered to a nursing woman. ### Pediatric Use - Prednicarbate emollient cream 0.1% may be used with caution in pediatric patients 1 year of age or older, although the safety and efficacy of drug use longer than 3 weeks have not been established. The use of prednicarbate emollient cream 0.1% is supported by results of a three-week, uncontrolled study in 59 pediatric patients between the ages of 4 months and 12 years of age with atopic dermatitis. None of the 59 pediatric patients showed evidence of HPA-axis suppression. Safety and efficacy of prednicarbate emollient cream 0.1% in pediatric patients below 1 year of age have not been established, therefore use in this age group is not recommended. Because of a higher ratio of skin surface area to body mass, pediatric patients are at a greater risk than adults of HPA-axis suppression and Cushing's syndrome when they are treated with topical corticosteroids. They are therefore also at greater risk of adrenal insufficiency during and/or after withdrawal of treatment. In an uncontrolled study in pediatric patients with atopic dermatitis, the incidence of adverse reactions possibly or probably associated with the use of prednicarbate emollient cream 0.1% was limited. - Mild signs of atrophy developed in 5 patients (5/59, 8%) during the clinical trial, with 2 patients exhibiting more than one sign. Two patients (2/59, 3%) developed shininess, and two patients (2/59, 3%) developed thinness. Three patients (3/59, 5%) were observed with mild telangiectasia. It is unknown whether prior use of topical corticosterioids was a contributing factor in the development of telangiectasia in 2 of the patients. Adverse effects including striae have also been reported with inappropriate use of topical corticosteroids in infants and children. Pediatric patients applying topical corticosteroids to greater than 20% of body surface are at higher risk for HPA-axis suppression. 
- HPA axis suppression, Cushing's syndrome, linear growth retardation, delayed weight gain and intracranial hypertension have been reported in children receiving topical corticosteroids. Manifestations of adrenal suppression in children include low plasma cortisol levels, and absence of response to ACTH stimulation. Manifestations of intracranial hypertension include bulging fontanelles, headaches, and bilateral papilledema. - Prednicarbate emollient cream 0.1% should not be used in the treatment of diaper dermatitis. ### Geriatric Use There is no FDA guidance on the use of Prednicarbate with respect to geriatric patients. ### Gender There is no FDA guidance on the use of Prednicarbate with respect to specific gender populations. ### Race There is no FDA guidance on the use of Prednicarbate with respect to specific racial populations. ### Renal Impairment There is no FDA guidance on the use of Prednicarbate in patients with renal impairment. ### Hepatic Impairment There is no FDA guidance on the use of Prednicarbate in patients with hepatic impairment. ### Females of Reproductive Potential and Males There is no FDA guidance on the use of Prednicarbate in females of reproductive potential and males. ### Immunocompromised Patients There is no FDA guidance on the use of Prednicarbate in patients who are immunocompromised. # Administration and Monitoring ### Administration - Topical ### Monitoring There is limited information regarding Monitoring of Prednicarbate in the drug label. # IV Compatibility There is limited information regarding IV Compatibility of Prednicarbate in the drug label. # Overdosage - Topically applied corticosteroids can be absorbed in sufficient amounts to produce systemic effects. # Pharmacology ## Mechanism of Action - In common with other topical corticosteroids, prednicarbate has anti-inflammatory, antipruritic, and vasoconstrictive properties. In general, the mechanism of the anti-inflammatory activity of topical steroids is unclear. - However, corticosteroids are thought to act by the induction of phospholipase A2 inhibitory proteins, collectively called lipocortins. It is postulated that these proteins control the biosynthesis of potent mediators of inflammation such as prostaglandins and leukotrienes by inhibiting the release of their common precursor arachidonic acid. Arachidonic acid is released from membrane phospholipids by phospholipase A2. ## Structure - Prednicarbate emollient cream 0.1% contains prednicarbate, a synthetic corticosteroid for topical dermatologic use. The chemical name of prednicarbate is 11β, 17, 21-trihydroxypregna-1,4-diene-3,20-dione 17-(ethyl carbonate) 21-propionate. Prednicarbate has the empirical formula C27H36O8 and a molecular weight of 488.58. Topical corticosteroids constitute a class of primarily synthetic steroids used topically as anti-inflammatory and antipruritic agents. - The CAS Registry Number is 73771-04-7. - Prednicarbate is a practically odorless white to yellow-white powder insoluble to practically insoluble in water and freely soluble in ethanol. - Each gram of prednicarbate emollient cream 0.1% contains 1.0 mg of prednicarbate in a base consisting of white petrolatum USP, purified water USP, isopropyl myristate NF, lanolin alcohols NF, mineral oil USP, cetostearyl alcohol NF, aluminum stearate, edetate disodium USP, lactic acid USP, and magnesium stearate DAB 9. ## Pharmacodynamics There is limited information regarding Pharmacodynamics of Prednicarbate in the drug label.
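As a simple consistency check of the molecular weight quoted in the Structure section above, the value can be recomputed from the empirical formula using standard atomic weights (our own arithmetic, not part of the label text).

```latex
% C27H36O8 with standard atomic weights (C = 12.011, H = 1.008, O = 15.999).
27(12.011) + 36(1.008) + 8(15.999) = 324.30 + 36.29 + 127.99 \approx 488.58
```

The result agrees with the molecular weight of 488.58 given for prednicarbate.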
## Pharmacokinetics - The extent of percutaneous absorption of topical corticosteroids is determined by many factors, including the vehicle and the integrity of the epidermal barrier. Use of occlusive dressings with hydrocortisone for up to 24 hours have not been shown to increase penetration; however, occlusion of hydrocortisone for 96 hours does markedly enhance penetration. - Topical corticosteroids can be absorbed from normal intact skin. Inflammation and/or other disease processes in the skin increase percutaneous absorption. - Studies performed with prednicarbate emollient cream 0.1 % indicate that the drug product is in the medium range of potency compared with other topical corticosteroids. ## Nonclinical Toxicology There is limited information regarding Nonclinical Toxicology of Prednicarbate in the drug label. # Clinical Studies There is limited information regarding Clinical Studies of Prednicarbate in the drug label. # How Supplied ## Storage There is limited information regarding Prednicarbate Storage in the drug label. # Images ## Drug Images ## Package and Label Display Panel # Patient Counseling Information - Patients using topical corticosteroids should receive the following information and instructions: - This medication is to be used as directed by the physician. It is for external use only. Avoid contact with the eyes. - This medication should not be used for any disorder other than that for which it was prescribed. - The treated skin area should not be bandaged, otherwise covered or wrapped so as to be occlusive, unless directed by the physician. - Patients should report to their physician any signs of local adverse reactions. - Parents of pediatric patients should be advised not to use this medication in the treatment of diaper dermatitis. This medication should not be applied in the diaper area as diapers or plastic pants may constitute occlusive dressing. - This medication should not be used on the face, underarms, or groin areas. - Contact between prednicarbate emollient cream 0.1% and latex containing products (eg. condoms, diaphragm etc.) should be avoided since paraffin in contact with latex can cause damage and reduce the effectiveness of any latex containing products. If latex products come into contact with prednicarbate emollient cream 0.1%, patients should be advised to discard the latex products. Patients should be advised that this medication is to be used externally only, not intravaginally. - As with other corticosteroids, therapy should be discontinued when control is achieved. If no improvement is seen within two weeks, contact the physician. # Precautions with Alcohol - Alcohol-Prednicarbate interaction has not been established. Talk to your doctor about the effects of taking alcohol with this medication. # Brand Names - PREDNICARBATE ® # Look-Alike Drug Names There is limited information regarding Prednicarbate Look-Alike Drug Names in the drug label. # Drug Shortage Status # Price
Prednicarbate Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1]; Associate Editor(s)-in-Chief: Ammu Susheela, M.D. [2] # Disclaimer WikiDoc MAKES NO GUARANTEE OF VALIDITY. WikiDoc is not a professional health care provider, nor is it a suitable replacement for a licensed healthcare provider. WikiDoc is intended to be an educational tool, not a tool for any form of healthcare delivery. The educational content on WikiDoc drug pages is based upon the FDA package insert, National Library of Medicine content and practice guidelines / consensus statements. WikiDoc does not promote the administration of any medication or device that is not consistent with its labeling. Please read our full disclaimer here. # Overview Prednicarbate is an antiinflammatory that is FDA approved for the treatment of inflammatory and pruritic manifestations of corticosteroid responsive dermatoses. Common adverse reactions include pruritis, edema, paresthesia, urticaria, burning, allergic contact dermatitis and rash. # Adult Indications and Dosage ## FDA-Labeled Indications and Dosage (Adult) - Prednicarbate emollient cream 0.1% is a medium-potency corticosteroid indicated for the relief of the inflammatory and pruritic manifestations of corticosteroid responsive dermatoses. Prednicarbate emollient cream 0.1% may be used with caution in pediatric patients 1 year of age or older. The safety and efficacy of drug use for longer than 3 weeks in this population have not been established. Since safety and efficacy of prednicarbate emollient cream 0.1% have not been established in pediatric patients below 1 year of age, its use in this age group is not recommended. - Apply a thin film of prednicarbate emollient cream 0.1% to the affected skin areas twice daily. Rub in gently. - Prednicarbate emollient cream 0.1 % may be used in pediatric patients 1 year of age or older. Safety and efficacy of prednicarbate emollient cream 0.1% in pediatric patients for more than 3 weeks of use have not been established. Use in pediatric patients under 1 year of age is not recommended. - As with other corticosteroids, therapy should be discontinued when control is achieved. If no improvement is seen within 2 weeks, reassessment of the diagnosis may be necessary. - Prednicarbate emollient cream 0.1% should not be used with occlusive dressings unless directed by the physician. Prednicarbate emollient cream 0.1% should not be applied in the diaper area if the child still requires diapers or plastic pants as these garments may constitute occlusive dressing. ## Off-Label Use and Dosage (Adult) ### Guideline-Supported Use There is limited information regarding Off-Label Guideline-Supported Use of Prednicarbate in adult patients. ### Non–Guideline-Supported Use There is limited information regarding Off-Label Non–Guideline-Supported Use of Prednicarbate in adult patients. # Pediatric Indications and Dosage ## FDA-Labeled Indications and Dosage (Pediatric) - Prednicarbate emollient cream 0.1% may be used with caution in pediatric patients 1 year of age or older, although the safety and efficacy of drug use longer than 3 weeks have not been established. The use of prednicarbate emollient cream 0.1% is supported by results of a three-week, uncontrolled study in 59 pediatric patients between the ages of 4 months and 12 years of age with atopic dermatitis. None of the 59 pediatric patients showed evidence of HPA-axis suppression. 
Safety and efficacy of prednicarbate emollient cream 0.1% in pediatric patients below 1 year of age have not been established, therefore use in this age group is not recommended. Because of a higher ratio of skin surface area to body mass, pediatric patients are at a greater risk than adults of HPA-axis suppression and Cushing's syndrome when they are treated with topical corticosteroids. - They are therefore also at greater risk of adrenal insufficiency during and/or after withdrawal of treatment. In an uncontrolled study in pediatric patients with atopic dermatitis, the incidence of adverse reactions possibly or probably associated with the use of prednicarbate emollient cream 0.1% was limited. - Mild signs of atrophy developed in 5 patients (5/59, 8%) during the clinical trial, with 2 patients exhibiting more than one sign. Two patients (2/59, 3%) developed shininess, and two patients (2/59, 3%) developed thinness. Three patients (3/59, 5%) were observed with mild telangiectasia. It is unknown whether prior use of topical corticosterioids was a contributing factor in the development of telangiectasia in 2 of the patients. - Adverse effects including striae have also been reported with inappropriate use of topical corticosteroids in infants and children. Pediatric patients applying topical corticosteroids to greater than 20% of body surface are at higher risk for HPA-axis suppression. - HPA axis suppression, Cushing's syndrome, linear growth retardation, delayed weight gain and intracranial hypertension have been reported in children receiving topical corti-costeroids. Manifestations of adrenal suppression in children include low plasma cortisol levels, and absence of response to ACTH stimulation. Manifestations of intracranial hypertension include bulging fontanelles, headaches, and bilateral papilledema. - Prednicarbate emollient cream 0.1% should not be used in the treatment of diaper dermatitis. ## Off-Label Use and Dosage (Pediatric) ### Guideline-Supported Use There is limited information regarding Off-Label Guideline-Supported Use of Prednicarbate in pediatric patients. ### Non–Guideline-Supported Use There is limited information regarding Off-Label Non–Guideline-Supported Use of Prednicarbate in pediatric patients. # Contraindications - Prednicarbate emollient cream 0.1% is contraindicated in those patients with a history of hypersensitivity to any of the components in the preparations. # Warnings - Systemic absorption of topical corticosteroids can produce reversible hypothalamic-pituitary-adrenal (HPA) axis suppression with the potential for glucocorticosteroid insufficiency after withdrawal of treatment. - Manifestations of Cushing's syndrome, hyperglycemia, and glucosuria can also be produced in some patients by systemic absorption of topical corticosteroids while on treatment. - Patients applying a topical steroid to a large surface area or under occlusion should be evaluated periodically for evidence of HPA-axis suppression. This may be done by using the ACTH stimulation, A.M. plasma cortisol, and urinary free cortisol tests. - Prednicarbate emollient cream 0.1% did not produce significant HPA-axis suppression when used at a dose of 30g/day for a week in 10 adult patients with extensive psoriasis or atopic dermatitis. Prednicarbate emollient cream 0.1% did not produce HPA-axis suppression in any of 59 pediatric patients with extensive atopic dermatitis when applied BID for 3 weeks to > 20% of the body surface. 
- If HPA-axis suppression is noted, an attempt should be made to withdraw the drug, to reduce the frequency of the application, or to substitute a less potent corticosteroid. Recovery of HPA-axis function is generally prompt upon discontinuation of topical corticosteroids. Infrequently, signs and symptoms of glucocorticosteroid insufficiency may occur, requiring supplemental systemic corticosteroids. For information on systemic supplementation, see prescribing information for those products. - Pediatric patients may be more susceptible to systemic toxicity from equivalent doses due to their larger skin surface to body mass ratios. - If irritation develops, prednicarbate emollient cream 0.1% should be discontinued and appropriate therapy instituted. Allergic contact dermatitis with corticosteroids is usually diagnosed by observing a failure to heal rather than noting a clinical exacerbation, as observed with most topical products not containing corticosteroids. Such an observation should be corroborated with appropriate diagnostic patch testing. - If concomitant skin infections are present or develop, an appropriate antifungal or antibacterial agent should be used. - If a favorable response does not occur promptly, use of prednicarbate emollient cream 0.1% should be discontinued until the infection has been adequately controlled. # Adverse Reactions ## Clinical Trials Experience There is limited information regarding Clinical Trial Experience of Prednicarbate in the drug label. ## Postmarketing Experience - In controlled adult clinical studies, the incidence of adverse reactions probably or possibly associated with the use of prednicarbate emollient cream 0.1% was approximately 4%. Reported reactions included mild signs of skin atrophy in 1% of treated patients, as well as the following reactions which were reported in less than 1% of patients: pruritis, edema, paresthesia, urticaria, burning, allergic contact dermatitis and rash. - In an uncontrolled study in pediatric patients with atopic dermatitis, the incidence of adverse reactions possibly or probably associated with the use of prednicarbate emollient cream 0.1 % was limited. Mild signs of atrophy developed in 5 patients (5/59, 8%) during the clinical trial, with 2 patients exhibiting more than one sign. Two patients (2/59, 3%) developed shininess, and 2 patients (2/59, 3%) developed thinness. Three patients (3/59, 5 %) were observed with mild telangiectasia. It is unknown whether prior use of topical corticosteroids was a contributing factor in the development of telangiectasia in 2 of the patients. - The following additional local adverse reactions have been reported infrequently with topical corticosteroids, but may occur more frequently with the use of occlusive dressings. These reactions are listed in an approximate decreasing order of occurrence: folliculitis, acneiform eruptions, hypopigmentation, perioral dermatitis, secondary infection, striae and miliaria. # Drug Interactions There is limited information regarding Prednicarbate Drug Interactions in the drug label. # Use in Specific Populations ### Pregnancy Pregnancy Category (FDA): - Corticosteroids have been shown to be teratogenic in laboratory animals when administered systemically at relatively low dosage levels. Some corticosteroids have been shown to be teratogenic after dermal application in laboratory animals. 
- Prednicarbate has been shown to be teratogenic and embryotoxic in Wistar rats and Himalayan rabbits when given subcutaneously during gestation at doses 1900 times and 45 times the recommended topical human dose, assuming a percutaneous absorption of approximately 3%. In the rats, slightly retarded fetal development and an incidence of thickened and wavy ribs higher than the spontaneous rate were noted. - In rabbits, increased liver weights and slight increase in the fetal intrauterine death rate were observed. The fetuses that were delivered exhibited reduced placental weight, increased frequency of cleft palate, ossification disorders in the sternum, omphalocele, and anomalous posture of the forelimbs. - There are no adequate and well-controlled studies in pregnant women on teratogenic effects of prednicarbate. Prednicarbate emollient cream 0.1% should be used during pregnancy only if the potential benefit justifies the potential risk to the fetus. Pregnancy Category (AUS): - Australian Drug Evaluation Committee (ADEC) Pregnancy Category There is no Australian Drug Evaluation Committee (ADEC) guidance on usage of Prednicarbate in women who are pregnant. ### Labor and Delivery There is no FDA guidance on use of Prednicarbate during labor and delivery. ### Nursing Mothers - Systemically administered corticosteroids appear in human milk and could suppress growth, interfere with endogenous corticosteroid production, or cause other untoward effects. It is not known whether topical administration of corticosteroids could result in sufficient systemic absorption to produce detectable quantities in human milk. Because many drugs are excreted in human milk, caution should be exercised when prednicarbate emollient cream 0.1% is administered to a nursing woman. ### Pediatric Use - Prednicarbate emollient cream 0.1% may be used with caution in pediatric patients 1 year of age or older, although the safety and efficacy of drug use longer than 3 weeks have not been established. The use of prednicarbate emollient cream 0.1% is supported by results of a three-week, uncontrolled study in 59 pediatric patients between the ages of 4 months and 12 years of age with atopic dermatitis. None of the 59 pediatric patients showed evidence of HPA-axis suppression. Safety and efficacy of prednicarbate emollient cream 0.1% in pediatric patients below 1 year of age have not been established, therefore use in this age group is not recommended. Because of a higher ratio of skin surface area to body mass, pediatric patients are at a greater risk than adults of HPA-axis suppression and Cushing's syndrome when they are treated with topical corticosteroids. They are therefore also at greater risk of adrenal insufficiency during and/or after withdrawal of treatment. In an uncontrolled study in pediatric patients with atopic dermatitis, the incidence of adverse reactions possibly or probably associated with the use of prednicarbate emollient cream 0.1% was limited. - Mild signs of atrophy developed in 5 patients (5/59, 8%) during the clinical trial, with 2 patients exhibiting more than one sign. Two patients (2/59, 3%) developed shininess, and two patients (2/59, 3%) developed thinness. Three patients (3/59, 5%) were observed with mild telangiectasia. It is unknown whether prior use of topical corticosterioids was a contributing factor in the development of telangiectasia in 2 of the patients. Adverse effects including striae have also been reported with inappropriate use of topical corticosteroids in infants and children. 
Pediatric patients applying topical corticosteroids to greater than 20% of body surface are at higher risk for HPA-axis suppression. - HPA axis suppression, Cushing's syndrome, linear growth retardation, delayed weight gain and intracranial hypertension have been reported in children receiving topical corti-costeroids. Manifestations of adrenal suppression in children include low plasma cortisol levels, and absence of response to ACTH stimulation. Manifestations of intracranial hypertension include bulging fontanelles, headaches, and bilateral papilledema. - Prednicarbate emollient cream 0.1% should not be used in the treatment of diaper dermatitis. ### Geriatic Use There is no FDA guidance on the use of Prednicarbate with respect to geriatric patients. ### Gender There is no FDA guidance on the use of Prednicarbate with respect to specific gender populations. ### Race There is no FDA guidance on the use of Prednicarbate with respect to specific racial populations. ### Renal Impairment There is no FDA guidance on the use of Prednicarbate in patients with renal impairment. ### Hepatic Impairment There is no FDA guidance on the use of Prednicarbate in patients with hepatic impairment. ### Females of Reproductive Potential and Males There is no FDA guidance on the use of Prednicarbate in women of reproductive potentials and males. ### Immunocompromised Patients There is no FDA guidance one the use of Prednicarbate in patients who are immunocompromised. # Administration and Monitoring ### Administration - Topical ### Monitoring There is limited information regarding Monitoring of Prednicarbate in the drug label. # IV Compatibility There is limited information regarding IV Compatibility of Prednicarbate in the drug label. # Overdosage - Topically applied corticosteroids can be absorbed in sufficient amounts to produce systemic effects. # Pharmacology ## Mechanism of Action - In common with other topical corticosteroids, prednicarbate has anti-inflammatory, antipruritic, and vasoconstrictive properties. In general, the mechanism of the anti-inflammatory activity of topical steroids is unclear. - However, corticosteroids are thought to act by the induction of phospholipase A2 inhibitory proteins, collectively called lipocortins. It is postulated that these proteins control the biosynthesis of potent mediators of inflammation such as prostaglandins and leukotrienes by inhibiting the release of their common precursor arachidonic acid. Arachidonic acid is released from membrane phospholipids by phospholipase A2. ## Structure - Prednicarbate emollient cream 0.1% contains prednicarbate, a synthetic corticosteroid for topical dermatologic use. The chemical name of prednicarbate is 11β, 17, 21-trihydroxypregna-1,4-diene-3,20-dione 17-(ethyl carbonate) 21-propionate. Prednicarbate has the empirical formula C27H36O8 and a molecular weight of 488.58. Topical corticosteroids constitute a class of primarily synthetic steroids used topically as anti-inflammatory and antipruritic agents. - The CAS Registry Number is 73771-04-7. The chemical structure is: - Prednicarbate is a practically odorless white to yellow-white powder insoluble to practically insoluble in water and freely soluble in ethanol. 
- Each gram of prednicarbate emollient cream 0.1% contains 1.0 mg of prednicarbate in a base consisting of white petrolatum USP, purified water USP, isopropyl myristate NF, lanolin alcohols NF, mineral oil USP, cetostearyl alcohol NF, aluminum stearate, edetate disodium USP, lactic acid USP, and magnesium stearate DAB 9. ## Pharmacodynamics There is limited information regarding Pharmacodynamics of Prednicarbate in the drug label. ## Pharmacokinetics - The extent of percutaneous absorption of topical corticosteroids is determined by many factors, including the vehicle and the integrity of the epidermal barrier. Use of occlusive dressings with hydrocortisone for up to 24 hours have not been shown to increase penetration; however, occlusion of hydrocortisone for 96 hours does markedly enhance penetration. - Topical corticosteroids can be absorbed from normal intact skin. Inflammation and/or other disease processes in the skin increase percutaneous absorption. - Studies performed with prednicarbate emollient cream 0.1 % indicate that the drug product is in the medium range of potency compared with other topical corticosteroids. ## Nonclinical Toxicology There is limited information regarding Nonclinical Toxicology of Prednicarbate in the drug label. # Clinical Studies There is limited information regarding Clinical Studies of Prednicarbate in the drug label. # How Supplied - ## Storage There is limited information regarding Prednicarbate Storage in the drug label. # Images ## Drug Images ## Package and Label Display Panel # Patient Counseling Information - Patients using topical corticosteroids should receive the following information and instructions: - This medication is to be used as directed by the physician. It is for external use only. Avoid contact with the eyes. - This medication should not be used for any disorder other than that for which it was prescribed. - The treated skin area should not be bandaged, otherwise covered or wrapped so as to be occlusive, unless directed by the physician. - Patients should report to their physician any signs of local adverse reactions. - Parents of pediatric patients should be advised not to use this medication in the treatment of diaper dermatitis. This medication should not be applied in the diaper area as diapers or plastic pants may constitute occlusive dressing. - This medication should not be used on the face, underarms, or groin areas. - Contact between prednicarbate emollient cream 0.1% and latex containing products (eg. condoms, diaphragm etc.) should be avoided since paraffin in contact with latex can cause damage and reduce the effectiveness of any latex containing products. If latex products come into contact with prednicarbate emollient cream 0.1%, patients should be advised to discard the latex products. Patients should be advised that this medication is to be used externally only, not intravaginally. - As with other corticosteroids, therapy should be discontinued when control is achieved. If no improvement is seen within two weeks, contact the physician. # Precautions with Alcohol - Alcohol-Prednicarbate interaction has not been established. Talk to your doctor about the effects of taking alcohol with this medication. # Brand Names - PREDNICARBATE ®[1] # Look-Alike Drug Names There is limited information regarding Prednicarbate Look-Alike Drug Names in the drug label. # Drug Shortage Status # Price
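As a quick arithmetic check of the Structure section above, which gives the empirical formula C27H36O8 and a molecular weight of 488.58, the short sketch below recomputes the molecular weight from standard atomic weights; those atomic-weight constants are general chemistry values and are not taken from the label.

```python
# Recompute the molecular weight quoted for prednicarbate (C27H36O8).
# The atomic weights below are standard constants, not values from the label.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}
FORMULA = {"C": 27, "H": 36, "O": 8}

molecular_weight = sum(ATOMIC_WEIGHT[element] * count for element, count in FORMULA.items())
print(f"C27H36O8 molecular weight: {molecular_weight:.2f}")  # ~488.58, matching the figure in the text
```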
https://www.wikidoc.org/index.php/Prednicarbate
017269bcf1514c634ceec2017340a29e164d20ac
wikidoc
Premedication
Premedication

# Overview
Premedication refers to a drug treatment given to a patient before a (surgical or invasive) medical procedure. These drugs are typically sedative or analgesic.
Premedication before chemotherapy for cancer often refers to special drug regimens (usually 3 drugs, eg dexamethasone, diphenhydramine and omeprazole) given to a patient hours or minutes before the chemotherapy to avert hypersensitivity reactions (allergic reactions).
Premedication Editor-in-Chief: Santosh Patel M.D., FRCA [1] Please Join in Editing This Page and Apply to be an Editor-In-Chief for this topic: There can be one or more than one Editor-In-Chief. You may also apply to be an Associate Editor-In-Chief of one of the subtopics below. Please mail us [2] to indicate your interest in serving either as an Editor-In-Chief of the entire topic or as an Associate Editor-In-Chief for a subtopic. Please be sure to attach your CV and or biographical sketch. # Overview Premedication refers to a drug treatment given to a patient before a (surgical or invasive) medical procedure. These drugs are typically sedative or analgesic. Premedication before chemotherapy for cancer often refers to special drug regimens (usually 3 drugs, eg dexamethasone, diphenhydramine and omeprazole) given to a patient hours or minutes before the chemotherapy to avert hypersensitivity reactions (allergic reactions). Template:SIB de:Prämedikation Template:WH Template:WS
https://www.wikidoc.org/index.php/Premedication
f55732901bd2751de2c9866e4d800cec34fe9257
wikidoc
Prenatal care
Prenatal care # Overview 'Prenatal care' refers to the medical care recommended for women before and during pregnancy. The aim of good prenatal care is to detect any potential problems early, to prevent them if possible (through recommendations on adequate nutrition, exercise, vitamin intake etc), and to direct the woman to appropriate specialists, hospitals, etc. if necessary. The availability of routine prenatal care has played a part in reducing maternal death rates and miscarriages as well as birth defects, low birth weight, and other preventable infant problems in the developed world. While availability of prenatal care has considerable personal health and social benefits, socioeconomic problems prevent its universal adoption in many developed as well as developing nations. Studies in Canada and the United States have shown that communities in rural areas as well as minorities are less likely to have available prenatal care and also have higher rates of infant mortality and miscarriage. One prenatal practice is for the expecting mother to consume vitamins with at least 400 mcg of folic acid to help prevent neural tube defects. Prenatal care generally consists of: - monthly visits during the first two trimesters (from week 1-28) - biweekly from 28 to week 36 of pregnancy - weekly after week 36 (delivery at week 38-40) # Physical examinations Physical examinations generally consist of: - collection of (mother's) medical history - checking (mother's) blood pressure - (mother's) height and weight - pelvic exam - (mother's) blood and urine tests - discussion with caregiver # Ultrasound Obstetric ultrasounds are most commonly performed during the second trimester at approximately week 20. Ultrasounds are considered relatively safe and have been used for over 35 years for monitoring pregnancy. Among other things, ultrasounds are used to: - Diagnose pregnancy (uncommon) - Check for multiple fetuses - Determine the sex of the fetus - Assess possible risks to the mother (e.g., miscarriage, blighted ovum, ectopic pregnancy, or a molar pregnancy condition) - Check for fetal malformation (e.g., club foot, spina bifida, cleft palate, clenched fists) - Determine if an intrauterine growth retardation condition exists - Note the development of fetal body parts (e.g., heart, brain, liver, stomach, skull, other bones) - Check the amniotic fluid and umbilical cord for possible problems - Determine due date (based on measurements and relative developmental progress) Generally an Ultrasound is ordered whenever an abnormality is suspected or along a schedule similar to the following: - 7 weeks - confirm pregnancy, ensure its neither molar or ectopic, determine due date - 13-14 weeks (some areas) - evaluate the possibility of Down Syndrome - 18-20 weeks - see the expanded list above - 34 weeks (some areas) - evaluate size, verify placental position
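The visit schedule above is a simple rule keyed to gestational week. The minimal sketch below (in Python) simply restates that schedule; the function name and the cut-offs at weeks 28 and 36 are taken from the list above, and this is an illustration rather than clinical guidance.

```python
def recommended_visit_frequency(gestational_week: int) -> str:
    """Restate the typical prenatal visit schedule described above (illustrative only)."""
    if gestational_week <= 28:       # first two trimesters
        return "monthly"
    elif gestational_week <= 36:     # weeks 28 to 36
        return "every two weeks"
    else:                            # after week 36, until delivery at roughly week 38-40
        return "weekly"

# Example: recommended frequency at a few points in pregnancy
for week in (12, 30, 38):
    print(week, recommended_visit_frequency(week))
```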
Prenatal care For patient information, click here Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1] # Overview 'Prenatal care' refers to the medical care recommended for women before and during pregnancy. The aim of good prenatal care is to detect any potential problems early, to prevent them if possible (through recommendations on adequate nutrition, exercise, vitamin intake etc), and to direct the woman to appropriate specialists, hospitals, etc. if necessary. The availability of routine prenatal care has played a part in reducing maternal death rates and miscarriages as well as birth defects, low birth weight, and other preventable infant problems in the developed world[citation needed]. While availability of prenatal care has considerable personal health and social benefits, socioeconomic problems prevent its universal adoption in many developed as well as developing nations. Studies in Canada and the United States have shown that communities in rural areas as well as minorities are less likely to have available prenatal care and also have higher rates of infant mortality and miscarriage. One prenatal practice is for the expecting mother to consume vitamins with at least 400 mcg of folic acid to help prevent neural tube defects. Prenatal care generally consists of: - monthly visits during the first two trimesters (from week 1-28) - biweekly from 28 to week 36 of pregnancy - weekly after week 36 (delivery at week 38-40) # Physical examinations Physical examinations generally consist of: - collection of (mother's) medical history - checking (mother's) blood pressure - (mother's) height and weight - pelvic exam - (mother's) blood and urine tests - discussion with caregiver # Ultrasound Obstetric ultrasounds are most commonly performed during the second trimester at approximately week 20. Ultrasounds are considered relatively safe and have been used for over 35 years for monitoring pregnancy. Among other things, ultrasounds are used to: - Diagnose pregnancy (uncommon) - Check for multiple fetuses - Determine the sex of the fetus - Assess possible risks to the mother (e.g., miscarriage, blighted ovum, ectopic pregnancy, or a molar pregnancy condition) - Check for fetal malformation (e.g., club foot, spina bifida, cleft palate, clenched fists) - Determine if an intrauterine growth retardation condition exists - Note the development of fetal body parts (e.g., heart, brain, liver, stomach, skull, other bones) - Check the amniotic fluid and umbilical cord for possible problems - Determine due date (based on measurements and relative developmental progress) Generally an Ultrasound is ordered whenever an abnormality is suspected or along a schedule similar to the following: - 7 weeks - confirm pregnancy, ensure its neither molar or ectopic, determine due date - 13-14 weeks (some areas) - evaluate the possibility of Down Syndrome - 18-20 weeks - see the expanded list above - 34 weeks (some areas) - evaluate size, verify placental position
https://www.wikidoc.org/index.php/Prenatal_care
9e086f012858bf8cae9941a60e86dfed7dbe53a6
wikidoc
Preparation H
Preparation H
Preparation H is a popular brand of medications used in the treatment of hemorrhoids. It was originally packaged in a tube like toothpaste, with a similar consistency. Wyeth, the maker of Preparation H, has also released the product in a suppository form, which is not as popular as the cream.
Preparation H dates from about 1935. The company now named Wyeth was incorporated in 1926 as American Home Products, or AHP, and "one of AHP's earliest prizes was the acquisition of a sunburn oil in 1935 that the company transformed into Preparation H, which became one of the world's best-selling hemorrhoid treatments." AHP changed its name to Wyeth in 2002.

# Formulations
Preparation H products come in a variety of formulations. Some are water based gel; some are petroleum jelly based. They range from simple moisturizers with witch-hazel astringent to preparations containing pharmacological ingredients such as phenylephrine, pramoxine and hydrocortisone. Some also contain ingredients of uncertain properties such as aloe vera, shark liver oil and yeast extract. Formulations available also vary with country.
An active ingredient in some Preparation H products is phenylephrine in a 0.25% concentration, a drug which constricts blood vessels. This drug is more commonly used as a decongestant in cold medications since restricting blood flow in the sinuses will reduce the amount of mucus they create. Since hemorrhoids are caused by inflamed blood vessels, this can reduce their size.
Preparation H with hydrocortisone has only hydrocortisone as its active ingredient, in a 1% concentration.
A witch hazel medicated wipe is also available under the Preparation H brand.
The Canadian formulation of Preparation H includes a yeast extract called BioDyne which has been removed from the formulation sold in the United States. This yeast extract is believed by many to remove wrinkles from skin and heal dry, cracked, and irritated skin. Thus the Canadian formulation has acquired a market in the United States as a skin cream.
Although much has been written against this practice, Preparation H is sometimes recommended as part of tattoo aftercare. The thought is that the same properties that help soothe anal irritation also make it useful for calming the skin of a freshly implanted tattoo. Some claim it is less damaging to the tattoo than petroleum jelly, which can have a tendency to pull ink out of a fresh design. There is no supporting evidence that either is true. For formulations containing a vasoconstrictor, this property reduces the amount of bleeding by narrowing the blood vessels that supply the surface of the skin. It is also said to help prevent the formation of scar tissue when the tattoo heals. Dr. Jeff Herndon, resident assistant professor at the Dept. of Medicinal Chemistry at Virginia Commonwealth University's Medical College - referring to the formulation containing yeast extract and shark liver oil - says Preparation H should NOT be used for tattoos.
Plastic surgeons suggest Preparation-H can be used on the healing skin to prevent itching, because scratching the new skin before it heals into place could tear it loose.
In the 1960s Preparation H used the slogan "Effective even in cases of long standing". For years it was rumored that Preparation H was the most shoplifted item in US supermarkets, because customers were embarrassed when getting to the cash register.
Preparation H Preparation H is a popular brand of medications used in the treatment of hemorrhoids. It was originally packaged in a tube like toothpaste, with a similar consistency. Wyeth, the maker of Preparation H, has also released the product in a suppository form, which is not as popular as the cream. Preparation H dates from about 1935. The company now named Wyeth was incorporated in 1926 as American Home Products, or AHP, and "one of AHP's earliest prizes was the acquisition of a sunburn oil in 1935 that the company transformed into Preparation H, which became one of the world's best-selling hemorrhoid treatments."[1] AHP changed its name to Wyeth in 2002. # Formulations Preparation H products come in a variety of formulations. Some are water based gel; some are petroleum jelly based. They range from simple moisturizers with witch-hazel astringent to preparations containing pharmacological ingredients such as phenylephrine, pramoxine and hydrocortisone. Some also contain ingredients of uncertain properties such as aloe vera, shark liver oil and yeast extract.[2] Formulations available also vary with country. An active ingredient in some Preparation H products is phenylephrine in a 0.25% concentration, a drug which constricts blood vessels. This drug is more commonly used as a decongestant in cold medications since restricting blood flow in the sinuses will reduce the amount of mucus they create. Since hemorrhoids are caused by inflamed blood vessels, this can reduce their size. Preparation H with hydrocortisone has only hydrocortisone as its active ingredient, in a 1% concentration. A witch hazel medicated wipe is also available under the Preparation H brand. The Canadian formulation of Preparation H includes a yeast extract called BioDyne which has been removed from the formulation sold in the United States. This yeast extract is believed by many to remove wrinkles from skin and heal dry, cracked, and irritated skin. Thus the Canadian formulation has acquired a market in the United States as a skin cream. Although much has been written against this practice, Preparation H is sometimes recommended as part of tattoo aftercare. The thought is that same properties that help soothe anal irritation also make it useful for calming the skin of a freshly implanted tattoo. Some claim it is less damaging to the tattoo than Petroleum jelly, which can have a tendency to pull ink out of a fresh design. There is no supporting evidence that either is true. For formations containing vasonconstrictor, this property reduces the amount of bleeding, by narrowing the blood vessels that supply the surface of the skin. It is also said to help prevent the formation of scar tissue when the tattoo heals. Dr. Jeff Herndon, resident assistant professor at the Dept. of Medicinal Chemistry at Virginia Commonwealth University's Medical College - referring to the formulation containing yeast extract and shark liver oil - says Preparation H should NOT be used for tattoos.[3] Plastic surgeons suggest Preparation-H can be used on the healing skin to prevent itching, because if you scratched the new skin before it heals into place, you could tear it loose. In the 1960s Preparation H used the slogan "Effective even in cases of long standing". For years it was rumored that Preparation H was the most shoplifted item in US supermarkets, because customers were embarrassed when getting to the cash register.
https://www.wikidoc.org/index.php/Preparation_H
2e704fbd27ae5eb7df2dcb1cfa515c8e193b8c7f
wikidoc
Septum primum
Septum primum # Overview The cavity of the primitive atrium becomes subdivided into right and left chambers by a septum, the septum primum, which grows downward into the cavity. # Pathophysiology ## Gross Pathology - Atrial Septal Defect, Septum Primum; View from Right Atrium (a 4 month old baby) - Atrial Septal Defect, Septum Primum; Also Cleft in Anterior Cusp of Mitral Valve
Septum primum Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1] # Overview The cavity of the primitive atrium becomes subdivided into right and left chambers by a septum, the septum primum, which grows downward into the cavity. # Pathophysiology ## Gross Pathology - Atrial Septal Defect, Septum Primum; View from Right Atrium (a 4 month old baby) - Atrial Septal Defect, Septum Primum; Also Cleft in Anterior Cusp of Mitral Valve
https://www.wikidoc.org/index.php/Primary_septum
812982f55a1be586a9382b3aa3c3aef5810a062b
wikidoc
Primula veris
Primula veris
Primula veris (Cowslip; syn. Primula officinalis Hill) is a flowering plant in the genus Primula. The species is native throughout most of temperate Europe and Asia, and although absent from more northerly areas including much of north-west Scotland, it reappears in northernmost Sutherland and Orkney.
It is a low growing herbaceous perennial plant with a rosette of leaves 5-15 cm long and 2-6 cm broad. The deep yellow flowers are produced in the spring between April and May; they are in clusters of 10-30 together on a single stem 5-20 cm tall, each flower 9-15 mm broad. Red-flowered plants do occur, very rarely.
It is frequently found on more open ground than Primula vulgaris (Primrose) including open fields, meadows, and coastal dunes and clifftops. It is often included in wild-flower seed mixes used to landscape motorway banks and similar civil engineering earth-works where it may be seen in dense stands.
It may be confused with the closely related Primula elatior (Oxlip) which has a similar general appearance although the Oxlip has larger, pale yellow flowers more like a Primrose, and a corolla tube without folds.
Cowslip is a favourite food of wild rabbits.

# Folklore and herbalism
It is used medicinally as a diuretic, an expectorant, and an antispasmodic, as well as for the treatment of headaches, whooping cough, tremors, and other conditions. However, it can have irritant effects in people who are allergic to it.
Cowslips were made into wine, and were also used to flavour conventional wines. An ancient name for the plant is "paigle" (origin unknown). Another name, herb Peter, derives from the tale of St. Peter dropping the keys to the Gates of Heaven, with the cowslip springing from the spot. In the nineteenth century, cowslips were used as a garland on maypoles.
The Cowslip is the county flower of four counties in England: Essex, Northamptonshire, Surrey, and Worcestershire.
Primula veris Primula veris (Cowslip; syn. Primula officinalis Hill) is a flowering plant in the genus Primula. The species is native throughout most of temperate Europe and Asia, and although absent from more northerly areas including much of north-westScotland, it reappears in northernmost Sutherland and Orkney[1]. It is a low growing herbaceous perennial plant with a rosette of leaves 5-15 cm long and 2-6 cm broad. The deep yellow flowers are produced in the spring between April and May; they are in clusters of 10-30 together on a single stem 5-20 cm tall, each flower 9-15 mm broad. Red-flowered plants do occur, very rarely. It is frequently found on more open ground than Primula vulgaris (Primrose) including open fields, meadows, and coastal dunes and clifftops. It is often included in wild-flower seed mixes used to landscape motorway banks and similar civil engineering earth-works where it may be seen in dense stands. It may be confused with the closely related Primula elatior (Oxlip) which has a similar general appearance although the Oxlip has larger, pale yellow flowers more like a Primrose, and a corolla tube without folds. Cowslip is a favourite food of wild rabbits. # Folklore and herbalism It is used medicinally as a diuretic, an expectorant, and an antispasmodic, as well as for the treatment of headaches, whooping cough, tremors, and other conditions. However it can have irritant effects in people who are allergic to it[2] Cowslips were made into wine, and also to flavour conventional wines. An ancient name for the plant is "paigle" (origin unknown). Another name, herb Peter, derives from the tale of St. Peter dropping the keys to the Gates of Heaven, with the cowslip springing from the spot. In the nineteenth century, cowslips were used as a garland on maypoles. The Cowslip is the county flower of four counties in England, these are Essex, Northamptonshire, Surrey, and Worcestershire.
https://www.wikidoc.org/index.php/Primula_veris
9a7ed4c335b3ec0ea1705820bc2c54fa7bf17cfc
wikidoc
Pristinamycin
Pristinamycin

# Overview
Pristinamycin (INN), also spelled pristinamycine, is an antibiotic used primarily in the treatment of staphylococcal infections, and to a lesser extent streptococcal infections. It is a streptogramin group antibiotic, similar to virginiamycin, derived from the bacterium Streptomyces pristinaespiralis. It is marketed in Europe by Sanofi-Aventis under the trade name Pyostacine.
Pristinamycin is a mixture of two components that have a synergistic antibacterial action. Pristinamycin I is a macrolide, and results in pristinamycin having a similar spectrum of action to erythromycin. Pristinamycin II is a depsipeptide.

# Clinical use
Despite the macrolide component, it is effective against erythromycin-resistant staphylococci and streptococci. Importantly, it is active against methicillin-resistant Staphylococcus aureus (MRSA). Its usefulness for severe infections, however, may be limited by the lack of an intravenous formulation owing to its poor solubility. Nevertheless, it is sometimes used as an alternative to rifampicin+fusidic acid or linezolid for the treatment of MRSA.
The lack of an intravenous formulation led to the development of the pristinamycin derivative quinupristin/dalfopristin, which may be administered intravenously for more severe MRSA infections.
Pristinamycin Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1] # Overview Pristinamycin (INN), also spelled pristinamycine, is an antibiotic used primarily in the treatment of staphylococcal infections, and to a lesser extent streptococcal infections. It is a streptogramin group antibiotic, similar to virginiamycin, derived from the bacterium Streptomyces pristina spiralis. It is marketed in Europe by Sanofi-Aventis under the trade name Pyostacine. Pristinamycin is a mixture of two components that have a synergistic antibacterial action. Pristinamycin I is a macrolide, and results in pristinamycin having a similar spectrum of action to erythromycin. Pristinamycin II is a depsipeptide.[1] # Clinical use Despite the macrolide component, it is effective against erythromycin-resistant staphylococci and strepcococci.[2][3] Importantly, it is active against methicillin-resistant Staphylococcus aureus (MRSA). Its usefulness for severe infections, however, may be limited by the lack of an intravenous formulation owing to its poor solubility.[4] Nevertheless it is sometimes used as an alternative to rifampicin+fusidic acid or linezolid for the treatment of MRSA. The lack of an intravenous formulation led to the development of the pristinamycin-derivative quinupristin/dalfopristin, which may be administered intravenously for more severe MRSA infections.
https://www.wikidoc.org/index.php/Pristinamycin
4f274d96792e9e1ec57cd013e8a3fa958733f049
wikidoc
Proarrhythmia
Proarrhythmia
Proarrhythmia is a new or more frequent occurrence of pre-existing arrhythmias, paradoxically precipitated by antiarrhythmic therapy; it is a side-effect associated with the administration of some antiarrhythmic drugs, as well as drugs for other indications. In other words, it is the tendency of antiarrhythmic drugs to facilitate the emergence of new arrhythmias.

# Types of Proarrhythmia
According to the Vaughan Williams (VW) classification of antiarrhythmic drugs, there are three main types of proarrhythmia during treatment with various antiarrhythmic drugs for atrial fibrillation or atrial flutter:

## Ventricular proarrhythmia
- Torsade de pointes (VW type IA and type III drugs)
- Sustained monomorphic ventricular tachycardia (usually VW type IC drugs)
- Sustained polymorphic ventricular tachycardia/ventricular fibrillation without long QT (VW types IA, IC, and III drugs)

## Atrial proarrhythmia
- Conversion of atrial fibrillation to flutter (usually VW type IC drugs or amiodarone). May be a desired effect.
- Increase of the defibrillation threshold (a potential problem with VW type IC drugs)
- Provocation of recurrence (probably VW types IA, IC and III drugs). It is rare.

## Abnormalities of conduction or impulse formation
- Sinus node dysfunction, atrioventricular block (almost all drugs)
- Accelerated conduction over an accessory pathway (digoxin, intravenous verapamil, or diltiazem)
- Acceleration of ventricular rate during atrial fibrillation (VW type IA and type IC drugs).

# Increased risk
- Presence of structural heart disease, especially LV systolic dysfunction.
- Class IC agents.
- Increased age.
- Females.

# Clinical pointers
## Class IA drugs
- Dose independent, occurring at normal drug levels.
- Follow QT interval, keep ms.
## Class IC drugs
- May be provoked by increased heart rate.
- Exercise stress tests after loading.
## Class III drugs
- Dose dependent.
- Follow bradycardia, prolonged QT closely.
Proarrhythmia Proarrhythmia is a new or more frequent occurrence of pre-existing arrhythmias, paradoxically precipitated by antiarrhythmic therapy, which means it is a side-effect associated with the administration of some existing antiarrhythmic drugs, as well as drugs for other indications. In other words, it is a tendency of antiarrhythmic drugs to facilitate emergence of new arrhythmias. # Types of Proarrhythmia According to the Vaughan Williams (VW) Classification of antiarrhythmic drugs, there are 3 main types of Proarrhythmia during treatment with various antiarrhythmic drugs for Atrial Fibrillation or Atrial flutter: ## Ventricular proarrhythmia - Torsade de pointes (VW type IA and type III drugs) - Sustained monomorphic ventricular tachycardia (usually VW type IC drugs) - Sustained polymorphic ventricular tachycardia/ventricular fibrillation without long QT (VQ types IA, IC, and III drugs) ## Atrial proarrhythmia - Conversion of atrial fribrillation to flutter (usually VW type IC drugs or amiodarone). May be a desired effect. - Increase of defibrillation threshold (a potential problem with VW type IC drugs) - Provocation of recurrence (probably VW types IA, IC and III drugs). It is rare. ## Abnormalities of conduction or impulse formation - Sinus node dysfunction, atrioventricular block (almost all drugs) - Accelerate conduction over accessory pathway (digoxin, intravenous verapamil, or diltiazem) - Acceleration of ventricular rate during atrial fibrillation (VW type IA and type IC drugs). # Increased risk - Presence of structural heart disease, especially LV systolic dysfunction. - Class IC agents. - Increased age. - Females. # Clinical pointers ## Class IA drugs - Dose independent, occurring at normal levels. - Follow QT interval, keep ms. ## Class IC drugs - May be provoked by increased heart rate. - Exercise stress tests after loading. ## Class III drugs - Dose dependent. - Follow bradycardia, prolonged QT closely. # External links - Mechanisms and management of proarrhythmia Template:Disease-stub
https://www.wikidoc.org/index.php/Proarrhythmia
bde3229b2b6ef5bbd88e77b13c471d59d8b014a0
wikidoc
Procalcitonin
Procalcitonin

# Overview
Procalcitonin (PCT) is a precursor of the hormone calcitonin, which is involved with calcium homeostasis, and is produced by the C-cells of the thyroid gland. It is there that procalcitonin is cleaved into calcitonin, katacalcin and a protein residue. It is not released into the bloodstream of healthy individuals. With the derangements that a severe infection with an associated systemic response brings, the blood levels of procalcitonin may rise to 100 ng/ml. In blood serum, procalcitonin has a half-life of 25 to 30 hours. The test is commercially available and produced by Thermo Fisher Scientific.
Triggering receptor expressed on myeloid cells-1 (TREM1) may be a more accurate serum biomarker for diagnosing infection. Comparisons of procalcitonin and C-reactive protein give conflicting results.

# Uses
## Diagnosis and prognosis of sepsis
Measurement of procalcitonin can be used as a marker of severe sepsis and generally grades well with the degree of sepsis, although levels of procalcitonin in the blood are very low. In a cross-sectional study, PCT had the greatest sensitivity (85%) and specificity (91%) for differentiating patients with SIRS from those with sepsis, when compared with IL-2, IL-6, IL-8, CRP and TNF-alpha. However, the test is not routinely used and has yet to gain widespread acceptance.
A 2013 review of 30 studies for diagnosing sepsis reported:
- Sensitivity 77% (95% CI 72%–81%)
- Specificity 79% (95% CI 74%–84%)
A 2007 review of 18 studies for diagnosing sepsis reported:
- Sensitivity 71% (95% CI 67–76)
- Specificity 71% (95% CI 67–76)
Subsequent meta-analyses have summarized the relevant studies for diagnosing sepsis among immunocompromised patients.
## Diagnosis of bacteremia
Meta-analyses are available. A meta-analysis reported a sensitivity of 76% and specificity of 70%.
Diagnosis of bacteremia in the elderly has been studied.
- Sensitivity 96%
- Specificity 68%
Diagnosis of bacteremia in neutropenic patients with systemic inflammatory response syndrome (SIRS) suggests lower sensitivity and higher specificity due to lower PCT levels in neutropenic patients.
## Prognosis of pneumonia
Various algorithms for interpreting and responding to the procalcitonin level are available. A cluster randomized trial found that the procalcitonin level can help guide antibiotic therapy. In this trial, "on the basis of serum procalcitonin concentrations, use of antibiotics was more or less discouraged (<0.1 microg/L or <0.25 microg/L) or encouraged (≥0.5 microg/L or ≥0.25 microg/L), respectively". However, a nonrandomized, observational study reported "limited, prognostic value" of procalcitonin.
Procalcitonin has been used in prediction of mortality in community-acquired pneumonia:
- Sensitivity 35%
- Specificity 92%
## Guiding antibiotic therapy
### Subjects with sepsis
Low quality evidence suggests that PCT-guided therapy may aid antimicrobial stewardship without change in mortality. However, the results of trials are heterogeneous and, in most trials, the quality of care in the control group had not been optimized.
An earlier systematic review by the Cochrane Collaboration concluded that among subjects whose care was guided by procalcitonin "antibiotic consumption was significantly reduced". The first author of this analysis receives financial support from the manufacturer of the procalcitonin test.
### Subjects with lower respiratory tract infection
PCT-guided therapy does not reduce mortality. An earlier meta-analysis by the Cochrane Collaboration concluded that mortality is reduced.
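To show how the pooled accuracy figures above can be applied, the sketch below (Python) converts the 2013 review's sensitivity (77%) and specificity (79%) into likelihood ratios and a post-test probability using Bayes' theorem. The 30% pretest probability is an assumed example value, not a figure from the cited studies.

```python
# Illustrative calculation from the pooled figures quoted above (2013 review:
# sensitivity 77%, specificity 79%). The pretest probability is an assumed example.
sensitivity = 0.77
specificity = 0.79

lr_positive = sensitivity / (1 - specificity)   # ~3.7
lr_negative = (1 - sensitivity) / specificity   # ~0.29

def post_test_probability(pretest: float, likelihood_ratio: float) -> float:
    """Convert a pretest probability into a post-test probability via a likelihood ratio."""
    pretest_odds = pretest / (1 - pretest)
    post_odds = pretest_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

pretest = 0.30  # assumed pretest probability of sepsis, for illustration only
print(f"LR+ = {lr_positive:.2f}, LR- = {lr_negative:.2f}")
print(f"Post-test probability after a positive PCT: {post_test_probability(pretest, lr_positive):.0%}")
print(f"Post-test probability after a negative PCT: {post_test_probability(pretest, lr_negative):.0%}")
```

With these figures, an assumed 30% pretest probability would rise to roughly 61% after a positive result and fall to roughly 11% after a negative one, which illustrates why the marker is treated as an adjunct to, rather than a replacement for, clinical judgment.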
Procalcitonin Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1]; Associate Editor(s)-in-Chief: Robert G. Badgett, M.D. [2] # Overview Procalcitonin (PCT) is a precursor of the hormone calcitonin, which is involved with calcium homeostasis, and is produced by the C-cells of the thyroid gland. It is there that procalcitonin is cleaved into calcitonin, katacalcin and a protein residue. It is not released into the blood stream of healthy individuals. With the derangements that a severe infection with an associated systemic response brings, the blood levels of procalcitonin may rise to 100 ng/ml. In blood serum, procalcitonin has a half-life of 25 to 30 hours. The test is commercially available and produced by Thermo Fisher Scientific. Triggering receptor expressed on myeloid cells-1 (TREM1) may be a more accurate serum biomarker for diagnosing infection.[1][2] Comparisons of the procalcitonin and c-reactive protein give conflicting results.[3][4] # Uses ## Diagnosis and prognosis of sepsis Measurement of procalcitonin can be used as a marker of severe sepsis and generally grades well with the degree of sepsis,[3] although levels of procalcitonin in the blood are very low. In a cross-sectional study PCT has the greatest sensitivity (85%) and specificity (91%) for differentiating patients with SIRS from those with sepsis, when compared with IL-2, IL-6, IL-8, CRP and TNF-alpha.[4] However, the test is not routinely used and has yet to gain widespread acceptance. A review for diagnosing sepsis in 2013 of 30 studies:[5] - Sensitivity 77% (95% CI 72% - 81%) - Specificity 79% (95% CI 74% – 84%) A review for diagnosing sepsis in 2007 of 18 studies:[6] - Sensitivity 71% (95% CI 67–76) - Specificity 71% (95% CI 67–76) Subsequent meta-analyses have summarized the relevant studies for diagnosing sepsis among immunocompromised patients[7]. ## Diagnosis of bacteremia Meta-analyses are available.[8] A meta-analysis reported a sensitivity of 76% and specificity of 70%.[9] Diagnosis of bacteremia in the elderly has been studied.[10] - Sensitivity 96% - Specificity 68% Diagnosis of bacteremia in the neutropenic patients with Systemic inflammatory response syndrome (SIRS) suggests lower sensitivity and higher specificity due to lower PCT levels in neutropenic patients.[3] ## Prognosis of pneumonia Various algorithms for interpreting and responding to the procalcitonin level are available[11]. A cluster randomized trial found that the procalcitonin level can help guide antibiotic therapy. In this trial, "on the basis of serum procalcitonin concentrations, use of antibiotics was more or less discouraged (<0.1 microg/L or <0.25 microg/L) or encouraged (> or =0.5 microg/L or > or =0.25 microg/L), respectively".[12]. However, a nonrandomized, observational study reported "limited, prognostic value" of the procalcitonin[13]. Procalcitonin has been used in prediction of mortality in community-acquired pneumonia:[14] - Sensitivity 35% - Specificity 92% ## Guiding antibiotic therapy ### Subjects with sepsis Low quality evidence suggests that PCT-guided therapy may aid antimicrobial stewardship without change in mortality.[15] However, the results of trials are heterogeneous and in most trials, the quality of care in the control group [had not been optimized]. 
An earlier systematic review by the Cochrane Collaboration concluded that among subjects whose care was guided by procalcitonin "antibiotic consumption was significantly reduced".[16] The first author of this analysis receives financial support from the manufacturer of the procalctonin test.[17] ### Subjects with lower respiratory tract infection PCT-guided therapy does not reduce mortality.[18] An earlier meta-analysis by the Cochrane Collaboration concluded that mortality is reduced[19].
https://www.wikidoc.org/index.php/Procalcitonin
bba88d46e1d9f9c281afe4034d3da3b88f3b3fd0
wikidoc
Project Harar
Project Harar

# Overview
Project Harar, also known under the working name Project Harar Ethiopia, is a UK registered charity working in Ethiopia to help children affected by facial disfigurements. Since 2002 Project Harar has secured treatment and aftercare for over 500 children and young patients living in poverty and isolation.
Project Harar works in collaboration with Ethiopian and foreign plastic, oral and maxillofacial surgeons and other specialists to treat children affected by a variety of conditions and give them normal facial function and a chance to live dignified lives, included in their home community.

# History
Project Harar was founded after Jonathan Crown, a London-based Chartered Accountant and businessman on a photography vacation, encountered two young boys with facial disfigurements, Fhami and Jemal, begging in the town of Harar, eastern Ethiopia, in 2001. Moved to do something to help, Jonathan Crown spent months organising the trip that would bring Fhami and Jemal to The Gambia, where they received highly complex surgery on board the M/V Anastasis operated by the charity Mercy Ships.
Since that first trip, Project Harar has collaborated with other charities and the Ethiopian health system, so that patients now receive treatment within Ethiopia, carried out by Ethiopian and volunteer foreign surgeons in hospitals in the capital Addis Ababa.
In 2006, the English actor John Hurt became Project Harar's first patron. In autumn 2007, Project Harar was featured in two BBC World Service programmes on noma and the treatment of patients from remote regions. In November 2007, a documentary film made by BBC Inside Out featured a group of severely-affected patients from the Hararghe and Somali regions of Ethiopia who underwent treatment by a team of UK medical volunteers, organised by the noma charity Facing Africa.

# Operation
Project Harar is a health outreach charity functioning as a bridge between those who could benefit from facial reconstructive surgery and the centralised Ethiopian health services. Its Ethiopian staff work in remote rural areas, liaising closely with local health administrators and extension workers, to locate and support children with facial disfigurements, who often face stigma and social exclusion.
The children and their families are informed about the possibilities of professional medical care and, if they decide to be assessed further for surgery, Project Harar covers all costs related to reaching and staying in hospital in the capital city, as well as the cost of prescription medicines and other after-care costs. Project Harar always arranges for a guardian to accompany the young patients and support them through their recovery process.
After surgery, Project Harar promotes the full integration of children back into community and family life, carrying out follow-up visits and playing a role in the reduction of stigma against people living with a facial disfigurement.
With the restoration of facial functions (chewing and swallowing, speech, salivary continence, facial expression) and improved appearance, Project Harar children are often given the opportunity to attend school for the first time. Through its close collaboration with Ethiopian health services and foreign specialists, Project Harar contributes to training opportunities for local health professionals and, in this way, helps to advance the surgical capacity of Ethiopia.
Project Harar operates mainly in the Oromia Region, including the zones of Misraq (East) Hararghe and Mirab (West) Hararghe, which take in the towns of Harar and Asebe Teferi. The charity also covers parts of the Somali Region, including Jijiga, and the chartered city of Dire Dawa. In 2008, Project Harar secured the treatment of 290 patients with an income of £114,856.

# Conditions treated
Project Harar helps children and other individuals living with a treatable facial disfigurement, which can be caused by a number of conditions. These include:
- cleft lip and palate
- noma - a devastating form of gangrene that attacks the tissue of the face
- tumour and ameloblastoma
- animal attack injuries and bite wounds
- burns and other accidental injuries
Project Harar Editors-In-Chief: Martin I. Newman, M.D., FACS, Cleveland Clinic Florida, [1]; Michel C. Samson, M.D., FRCSC, FACS [2] Please Join in Editing This Page and Apply to be an Editor-In-Chief for this topic: There can be one or more than one Editor-In-Chief. You may also apply to be an Associate Editor-In-Chief of one of the subtopics below. Please mail us [3] to indicate your interest in serving either as an Editor-In-Chief of the entire topic or as an Associate Editor-In-Chief for a subtopic. Please be sure to attach your CV and or biographical sketch. # Overview Project Harar, also known under the working name Project Harar Ethiopia, is a UK registered charity working in Ethiopia to help children affected by facial disfigurements. Since 2002 Project Harar has secured treatment and aftercare for over 500 children and young patients living in poverty and isolation. Project Harar works in collaboration with Ethiopian and foreign plastic, oral and maxillofacial surgeons and other specialists to treat children affected by a variety of conditions and give them normal facial function and a chance to live dignified lives, included in their home community. # History Project Harar was founded after Jonathan Crown, a London-based Chartered Accountant and businessman on a photography vacation,encountered two young boys with facial disfigurements, Fhami and Jemal, begging in the town of Harar, eastern Ethiopia, in 2001. Moved to do something to help, Jonathan Crown spent months organising the trip that would bring Fhami and Jemal to The Gambia, where they received highly complex surgery on board of the M/V Anastasis operated by the charity Mercy Ships.[1] Since that first trip, Project Harar has collaborated with other charities and the Ethiopian health system, so that patients now receive treatment within Ethiopia, carried out by Ethiopian and volunteer foreign surgeons in hospitals in the capital Addis Ababa. In 2006, the English actor John Hurt became Project Harar's first patron. In autumn 2007, Project Harar was featured in two BBC World Service programmes on noma and the treatment of patients from remote regions. In November 2007, a documentary film made by BBC Inside Out featured a group of severely-affected patients from the Hararghe and Somali regions of Ethiopia who underwent treatment by a team of UK medical volunnteers, organised by the noma charity Facing Africa.[2] # Operation Project Harar is a health outreach charity functioning as a bridge between those who could benefit from facial reconstructive surgery and the centralised Ethiopian health services. Its Ethiopian staff work in remote rural areas, liaising closely with local health administrators and extension workers, to locate and support children with facial disfigurements, who often face stigma and social exclusion[3]. The children and their families are informed about the possibilities of professional medical care and, if they decide to be assessed further for surgery, Project Harar covers all costs related to reaching and staying in hospital in the capital city, as well as the cost of prescription medicines and other after-care costs. Project Harar always arranges for a guardian to accompany the young patients and support them through their recovery process. After surgery, Project Harar promotes the full integration of children back into community and family life, carrying out follow-up visits and playing a role in the reduction of stigma against people living with a facial disfigurement. 
With the restoration of facial functions (chewing and swallowing, speech, salival continence, facial expression) and improved appearance, Project Harar children are often given for the first time the opportunity to attend school. Through its close collaboration with Ethiopian health services and foreign specialists, Project Harar contributes to training opportunities for local health professionals and, in this way, helps to advance the surgical capacity of Ethiopia. Project Harar operates mainly in the Oromia Region, including the zones of Misraq (East) Hararghe and Mirab (West) Hararghe which take in the towns of Harar and Asebe Teferi. The charity also covers parts of the Somali Region, including Jijiga, and the chartered city of Dire Dawa. In 2008, Project Harar secured the treatment of 290 patients with an income of £ £114,856[4]. # Conditions treated Project Harar helps children and other individuals living with a treatable facial disfigurement, which can be caused by a number of conditions. These include: - cleft lip and palate - noma - a devastating form of gangrene that attacks the tissue of the face - tumour and ameloblastoma - animal attack injuries and bite wounds - burns and other accidental injuries # External links - Project Harar The organisation's official website
https://www.wikidoc.org/index.php/Project_Harar
7c8a27b2f14196523543e24ee882a5b2556e6edb
wikidoc
Pronunciation
Pronunciation

# Overview
Pronunciation refers to:
- the way a word or a language is usually spoken
- the manner in which someone utters a word
A word can be spoken in different ways by various individuals or groups, depending on many factors, such as:
- the area in which they grew up
- the area in which they now live
- if they have a speech defect
- their ethnic group
- their social class
- their education

# Linguistic terminology
Pronunciation is described in terms of the units of sound (phones) that speakers use in their language. The branch of linguistics which studies these units of sound is phonetics. Phones which play the same role are grouped together into classes called phonemes; the study of these is phonemics or phonematics or phonology.
Pronunciation Please Take Over This Page and Apply to be Editor-In-Chief for this topic: There can be one or more than one Editor-In-Chief. You may also apply to be an Associate Editor-In-Chief of one of the subtopics below. Please mail us [1] to indicate your interest in serving either as an Editor-In-Chief of the entire topic or as an Associate Editor-In-Chief for a subtopic. Please be sure to attach your CV and or biographical sketch. # Overview Pronunciation refers to: - the way a word or a language is usually spoken - the manner in which someone utters a word A word can be spoken in different ways by various individuals or groups, depending on many factors, such as: - the area in which they grew up - the area in which they now live - if they have a speech defect - their ethnic group - their social class - their education # Linguistic terminology People are counted as units of sound (phones) that they use in their language. The branch of linguistics which studies these units of sound is phonetics. Phones which play the same role are grouped together into classes called phonemes; the study of these is phonemics or phonematics or phonology.
https://www.wikidoc.org/index.php/Pronounce
00fea6a5922c977c9722d80ba3d38b01bcb60c09
wikidoc
Prostaglandin
Prostaglandin # Overview A prostaglandin is any member of a group of lipid compounds that are derived enzymatically from fatty acids and have important functions in the animal body. Every prostaglandin contains 20 carbon atoms, including a 5-carbon ring. They are mediators and have a variety of strong physiological effects; although they are technically hormones, they are rarely classified as such. The prostaglandins together with the thromboxanes and prostacyclins form the prostanoid class of fatty acid derivatives; the prostanoid class is a subclass of eicosanoids. # History and name The name prostaglandin derives from the prostate gland. When prostaglandin was first isolated from seminal fluid in 1935 by the Swedish physiologist Ulf von Euler, and independently by M.W. Goldblatt, it was believed to be part of the prostatic secretions (in actuality prostaglandins are produced by the seminal vesicles); it was later shown that many other tissues secrete prostaglandins for various functions. In 1971, it was determined that aspirin-like drugs could inhibit the synthesis of prostaglandins. The biochemists Sune K. Bergström, Bengt I. Samuelsson and John R. Vane jointly received the 1982 Nobel Prize in Physiology or Medicine for their researches on prostaglandins. # Biochemistry ## Biosynthesis Prostaglandins are found in virtually all tissues and organs. These are autocrine and paracrine lipid mediators that act upon platelet, endothelium, uterine and mast cells, among others. They are synthesized in the cell from the essential fatty acids (EFAs). An intermediate is created by phospholipase-A2, then passed into one of either the cyclooxygenase pathway or the lipoxygenase pathway to form either prostaglandin and thromboxane or leukotriene. The cyclooxygenase pathway produces thromboxane, prostacyclin and prostaglandin D, E and F. The lipoxygenase pathway is active in leukocytes and in macrophages and synthesizes leukotrienes. ## Release of prostaglandins from the cell Prostaglandins were originally believed to leave the cells via passive diffusion because of their high lipophilicity. The discovery of the prostaglandin transporter (PGT, SLCO2A1), which mediates the cellular uptake of prostaglandin, demonstrated that diffusion can not explain the penetration of prostaglandin through the cellular membrane. The release of prostaglandin has now also been shown to be mediated by a specific transporter, namely the multidrug resistance protein 4 (MRP4, ABCC4), a member of the ATP-binding cassette transporter superfamily. Whether MRP4 is the only transporter releasing prostaglandins from the cells is still unclear. ### Cyclooxygenases Prostaglandins are produced following the sequential oxidation of AA, DGLA or EPA by cyclooxygenases (COX-1 and COX-2) and terminal prostaglandin synthases. The classic dogma is as follows: - COX-1 is responsible for the baseline levels of prostaglandins. - COX-2 produces prostaglandins through stimulation. However, while COX-1 and COX-2 are both located in the blood vessels, stomach and the kidneys, prostaglandin levels are increased by COX-2 in scenarios of inflammation. ### Prostaglandin E synthase Prostaglandin E2 (PGE2) is generated from the action of prostaglandin E synthases on prostaglandin H2 (PGH2). Several prostaglandin E synthases have been identified. To date, microsomal prostaglandin E synthase-1 emerges as a key enzyme in the formation of PGE2. 
### Other terminal prostaglandin synthases Terminal prostaglandin synthases have been identified that are responsible for the formation of other prostaglandins. For example, hematopoietic and lipocalin prostaglandin D synthases (hPGDS and lPGDS) are responsible for the formation of PGD2 from PGH2. Similarly, prostacyclin (PGI2) synthase (PGIS) converts PGH2 into PGI2. A thromboxane synthase (TxAS) has also been identified. Prostaglandin F synthase (PGFS) catalyzes the formation of 9α,11β-PGF2α,β from PGD2 and PGF2α from PGH2 in the presence of NADPH. This enzyme has recently been crystallized in complex with PGD2 and bimatoprost (a synthetic analogue of PGF2α). # Function There are currently nine known prostaglandin receptors on various cell types. Prostaglandins ligate a subfamily of cell surface seven-transmembrane receptors, G-protein-coupled receptors. These receptors are termed DP1-2, EP1-4, FP, IP, and TP, each named for the prostaglandin that it binds (e.g., DP1-2 receptors bind PGD2). Through these varied receptors, prostaglandins act on a variety of cells and have a wide variety of actions: - cause constriction or dilatation in vascular smooth muscle cells - cause aggregation or disaggregation of platelets - sensitize spinal neurons to pain - constrict smooth muscle - regulate inflammatory mediation - regulate calcium movement - regulate hormones - control cell growth Prostaglandins are potent but have a short half-life before being inactivated and excreted. Therefore, they exert only a paracrine (locally active) or autocrine (acting on the same cell from which they are synthesized) function. # Role in pharmacology ## Inhibition ## Clinical uses Synthetic prostaglandins are used: - To induce childbirth (parturition) or abortion (PGE2 or PGF2, with or without mifepristone, a progesterone antagonist) - To prevent closure of patent ductus arteriosus in newborns with particular cyanotic heart defects (PGE1) - To prevent and treat peptic ulcers (PGE) - As a vasodilator in severe Raynaud's phenomenon or ischemia of a limb - In pulmonary hypertension - In treatment of glaucoma (as in bimatoprost ophthalmic solution, a synthetic prostamide analog with ocular hypotensive activity) - To treat erectile dysfunction or in penile rehabilitation following surgery (PGE1 as alprostadil).
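As a quick illustration of the receptor classification described in the Function section, the sketch below (Python, used only for illustration) encodes the usual receptor-to-ligand pairings. The DP/EP/FP pairings follow the naming convention given in the text; the IP-prostacyclin and TP-thromboxane A2 pairings are standard pharmacology assumed here for completeness rather than stated explicitly above.

```python
# Illustrative mapping of the nine prostanoid receptors to their principal ligands.
# DP/EP/FP pairings follow the naming convention in the article text; the IP (PGI2)
# and TP (TXA2) pairings are standard pharmacology assumed here, not quoted from it.
PROSTANOID_RECEPTOR_LIGANDS = {
    "DP1": "PGD2", "DP2": "PGD2",
    "EP1": "PGE2", "EP2": "PGE2", "EP3": "PGE2", "EP4": "PGE2",
    "FP": "PGF2alpha",
    "IP": "PGI2 (prostacyclin)",    # assumed pairing
    "TP": "TXA2 (thromboxane A2)",  # assumed pairing
}

def receptors_for(ligand_prefix: str) -> list[str]:
    """Return the receptors whose principal ligand name starts with the given prefix."""
    return [receptor for receptor, ligand in PROSTANOID_RECEPTOR_LIGANDS.items()
            if ligand.startswith(ligand_prefix)]

if __name__ == "__main__":
    print(receptors_for("PGE2"))              # ['EP1', 'EP2', 'EP3', 'EP4']
    print(len(PROSTANOID_RECEPTOR_LIGANDS))   # 9 receptors, as noted in the text
```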
https://www.wikidoc.org/index.php/Prostaglandin
b840a25837d379ddb03d8122a329f43d87d1693b
wikidoc
Prostatectomy
Prostatectomy Steven C. Campbell, M.D., Ph.D. # Overview A prostatectomy is the surgical removal of all or part of the prostate gland. Abnormalities of the prostate, such as a tumour or enlargement of the gland for any reason, can restrict the normal flow of urine along the urethra. # Prostatectomy There are several forms of the operation: - Transurethral resection of the prostate (TURP): a cystoscope is passed up the urethra to the prostate, where the surrounding prostate tissue is excised. This is a common operation for benign prostatic hyperplasia (BPH), and outcomes are excellent for a high percentage of these patients (80-90%). A more refined and safer operation uses a high-powered holmium ("red") laser. This technique has been well documented as the only laser operation of a higher standard than the "old" TURP operation. - Open prostatectomy: a surgical procedure involving a skin incision and enucleation of the prostatic adenoma, through the prostatic capsule (RPP, retropubic prostatectomy) or through the bladder (SPP, suprapubic prostatectomy). Reserved for extremely large prostates. - Laparoscopic prostatectomy: several small incisions are made in the abdomen for the laparoscope and instruments, and the entire prostate is removed, sparing nerves that are more easily damaged by a retropubic or suprapubic approach. Laparoscopic prostatectomy offers advantages over the radical perineal or retropubic operation and is more economical than the robot-assisted technique. - Robotic-assisted prostatectomy: laparoscopic robotic arms are controlled by a surgeon. The robot gives the surgeon much more dexterity than conventional laparoscopy while offering the same advantages over open prostatectomy: much smaller incisions, less pain, less bleeding, less risk of infection, faster healing time, and shorter hospital stay. While the cost of such procedures is high, costs are declining rapidly. - Radical perineal prostatectomy: an incision is made in the perineum, midway between the rectum and scrotum, and the prostate is removed. Radical prostatectomy is one of the key treatments for prostate cancer. - Radical retropubic prostatectomy: an incision is made in the lower abdomen, and the prostate is removed by going behind the pubic bone (retropubic). Radical prostatectomy is one of the key treatments for prostate cancer. - Transurethral plasmakinetic vaporization prostatectomy (TUPVP).
https://www.wikidoc.org/index.php/Prostatectomy
7bc1352466649d8bbdb264da05b827c79fdf26f7
wikidoc
Protothecosis
Protothecosis Protothecosis is a disease found in dogs, cats, cattle, and humans caused by a type of green algae known as Prototheca that lacks chlorophyll. It is the only known infectious pathogen that is also a plant. The two most common species are Prototheca wickerhami and Prototheca zopfii. Both are known to cause disease in dogs, while most human cases are caused by P. wickerhami. Prototheca is found worldwide in sewage and soil. Infection is rare despite high exposure, and can be related to a defective immune system. In dogs, females and Collies are most commonly affected. The first human case was identified in 1964 in Sierra Leone. # The organism Prototheca has been thought to be a mutant of Chlorella, a type of single-celled green algae. However, while Chlorella contains galactose and galactosamine in the cell wall, Prototheca lacks these. Also, Chlorella obtains its energy through photosynthesis, while Prototheca is saprotrophic, feeding on dead and decaying organic matter. When Prototheca was first isolated from slime flux of trees in 1894, it was thought to be a type of fungus. Its size varies from 2 to 15 microns. # Cutaneous protothecosis The two main forms of protothecosis are cutaneous and disseminated. Cats are exclusively infected with the cutaneous, or skin, form. Symptoms include soft lumps on the skin of the ears, legs, feet, nose, and head. Infection usually occurs through a wound in the skin. Humans are also usually affected by the cutaneous form, but immunocompromised individuals may develop disseminated protothecosis. Surgery is the treatment of choice for the cutaneous form. # Protothecosis in cattle Cattle can be affected by protothecal enteritis and mastitis. Protothecal mastitis is endemic worldwide, although most cases of infected herds have been reported in Germany, the United States, and Brazil. # Protothecosis in dogs Disseminated protothecosis is most commonly seen in dogs. The alga enters the body through the mouth or nose and causes infection in the intestines. From there it can spread to the eye, brain, and kidneys. Symptoms can include diarrhea, weight loss, weakness, inflammation of the eye (uveitis), retinal detachment, ataxia, and seizures. Dogs with acute blindness and diarrhea that develop exudative retinal detachment should be assessed for protothecosis. Diagnosis is through culture or finding the organism in a biopsy, cerebrospinal fluid, vitreous humour, or urine. Treatment of the disseminated form in dogs is very difficult, although use of antifungal medication has been successful in a few cases. Prognosis for cutaneous protothecosis is guarded and depends on the surgical options. Prognosis for the disseminated form is grave. This may be due to delayed recognition and treatment.
https://www.wikidoc.org/index.php/Protothecosis
d7a1f31997efecf8ba176cf0529f3f3eba4254f6
wikidoc
Protriptyline
Protriptyline # Disclaimer WikiDoc MAKES NO GUARANTEE OF VALIDITY. WikiDoc is not a professional health care provider, nor is it a suitable replacement for a licensed healthcare provider. WikiDoc is intended to be an educational tool, not a tool for any form of healthcare delivery. The educational content on WikiDoc drug pages is based upon the FDA package insert, National Library of Medicine content and practice guidelines / consensus statements. WikiDoc does not promote the administration of any medication or device that is not consistent with its labeling. Please read our full disclaimer here. # Black Box Warning # Overview Protriptyline is a tricyclic antidepressant that is FDA approved for the treatment of depression. There is a Black Box Warning for this drug as shown here. Common adverse reactions include hypotension, tachycardia, constipation, xerostomia, dizziness, somnolence, blurred vision. # Adult Indications and Dosage ## FDA-Labeled Indications and Dosage (Adult) Depression - 5 to 40 mg/day PO (divided into 3-4 doses per day); - Up to a max of 60 mg/day in divided doses; ## Off-Label Use and Dosage (Adult) ### Guideline-Supported Use There is limited information about Off-Label Guideline-Supported Use of Protriptyline in adult patients. ### Non–Guideline-Supported Use There is limited information about Off-Label Non–Guideline-Supported Use of Protriptyline in adult patients. # Pediatric Indications and Dosage ## FDA-Labeled Indications and Dosage (Pediatric) Depression - 5 mg PO 3 times a day; if necessary, increase gradually - Safety and effectiveness in pediatric patients have not been established ## Off-Label Use and Dosage (Pediatric) ### Guideline-Supported Use There is limited information about Off-Label Guideline-Supported Use of Protriptyline in pediatric patients. ### Non–Guideline-Supported Use - There is limited information about Off-Label Non–Guideline-Supported Use of Protriptyline in pediatric patients. # Contraindications - Contraindicated in patients who have shown prior hypersensitivity to it. - It should not be given concomitantly with a monoamine oxidase inhibiting compound. - Hyperpyretic crises, severe convulsions, and deaths have occurred in patients receiving tricyclic antidepressant and monoamine oxidase inhibiting drugs simultaneously. - When it is desired to substitute protriptyline for a monoamine oxidase inhibitor, a minimum of 14 days should be allowed to elapse after the latter is discontinued. - Protriptyline should then be initiated cautiously with gradual increase in dosage until optimum response is achieved. - Protriptyline is contraindicated in patients taking cisapride because of the possibility of adverse cardiac interactions including prolongation of the QT interval, cardiac arrhythmias and conduction system disturbances. - This drug should not be used during the acute recovery phase following myocardial infarction. # Warnings Clinical Worsening and Suicide Risk - Patients with major depressive disorder (MDD), both adult and pediatric, may experience worsening of their depression and/or the emergence of suicidal ideation and behavior (suicidality) or unusual changes in behavior, whether or not they are taking antidepressant medications, and this risk may persist until significant remission occurs. Suicide is a known risk of depression and certain other psychiatric disorders, and these disorders themselves are the strongest predictors of suicide.
There has been a long-standing concern, however, that antidepressants may have a role in inducing worsening of depression and the emergence of suicidality in certain patients during the early phases of treatment. Pooled analyses of short-term placebo-controlled trials of antidepressant drugs (SSRIs and others) showed that these drugs increase the risk of suicidal thinking and behavior (suicidality) in children, adolescents and young adults (aged 18-24) with major depressive disorder (MDD) and other psychiatric disorders. Short-term studies did not show an increase in the risk of suicidality with antidepressants compared to placebo in adults beyond age 24; there was a reduction with antidepressants compared to placebo in adults aged 65 and older. - The pooled analysis of placebo-controlled trials in children and adolescents with MDD, obsessive compulsive disorder (OCD), or other psychiatric disorders including a total of 24 short-term trials of 9 antidepressant drugs in over 4400 patients. The pooled analyses of placebo-controlled trials in adults with MDD or other psychiatric disorders included a total of 295 short-term trials (median duration of 2 months) of 11 antidepressant drugs in over 77,000 patients. There was considerable variation in risk of suicidality among drugs, but a tendency toward an increase in the younger patients for almost all drugs studied. There were differences in absolute risk of suicidality across the different indications, with the highest incidence in MDD. The risk differences (drug vs placebo), however, were relatively stable within age strata and across indications. - No suicides occurred in any of the pediatric trials. There were suicides in the adult trials, but the number was not sufficient to reach any conclusion about drug effect on suicide. - It is unknown whether the suicidality risk extends to longer-term use, i.e., beyond several months. However, there is substantial evidence from placebo-controlled maintenance trials in adults with depression that the use of antidepressants can delay the recurrence of depression. - All patients being treated with antidepressants for any indication should be monitored appropriately and observed closely for clinical worsening, suicidality, and unusual changes in behavior, especially during the initial few months of a course of drug therapy, or at times of dose changes, either increases or decreases. - The following symptoms, anxiety, agitation, panic attacks, insomnia, irritability, hostility, aggressiveness, impulsivity, akathisia (psychomotor restlessness), hypomania, and mania, have been reported in adult and pediatric patients being treated with antidepressants for major depressive disorder as well as for other indications, both psychiatric and nonpsychiatric. Although a causal link between the emergence of such symptoms and either the worsening of depression and/or the emergence of suicidal impulses has not been established, there is concern that such symptoms may represent precursors to emerging suicidality. - Consideration should be given to changing the therapeutic regimen, including possibly discontinuing the medication, in patients whose depression is persistently worse, or who are experiencing emergent suicidality or symptoms that might be precursors to worsening depression or suicidality, especially if these symptoms are severe, abrupt in onset, or were not part of the patient’s presenting symptoms. 
- If the decision has been made to discontinue treatment, medication should be tapered, as rapidly as is feasible, but with recognition that abrupt discontinuation can be associated with certain symptoms. - Families and caregivers of patients being treated with antidepressants for major depressive disorder or other indications, both psychiatric and nonpsychiatric, should be alerted about the need to monitor patients for the emergence of agitation, irritability, unusual changes in behavior, and the other symptoms described above, as well as the emergence of suicidality, and to report such symptoms immediately to health care providers. Such monitoring should include daily observation by families and caregivers. Prescriptions for protriptyline hydrochloride tablets should be written for the smallest quantity of tablets consistent with good patient management, in order to reduce the risk of overdose. Screening Patients for Bipolar Disorder - A major depressive episode may be the initial presentation of bipolar disorder. It is generally believed (though not established in controlled trials) that treating such an episode with an antidepressant alone may increase the likelihood of precipitation of a mixed/manic episode in patients at risk for bipolar disorder. Whether any of the symptoms described above represent such a conversion is unknown. However, prior to initiating treatment with an antidepressant, patients with depressive symptoms should be adequately screened to determine if they are at risk for bipolar disorder; such screening should include a detailed psychiatric history, including a family history of suicide, bipolar disorder, and depression. It should be noted that protriptyline hydrochloride is not approved for use in treating bipolar depression. - Protriptyline may block the antihypertensive effect of guanethidine or similarly acting compounds. - Protriptyline should be used with caution in patients with a history of seizures, and, because of its autonomic activity, in patients with a tendency to urinary retention, or increased intraocular tension. - Tachycardia and postural hypotension may occur more frequently with protriptyline than with other antidepressant drugs. Protriptyline should be used with caution in elderly patients and patients with cardiovascular disorders; such patients should be observed closely because of the tendency of the drug to produce tachycardia, hypotension, arrhythmias, and prolongation of the conduction time. Myocardial infarction and stroke have occurred with drugs of this class. - On rare occasions, hyperthyroid patients or those receiving thyroid medication may develop arrhythmias when this drug is given. - In patients who may use alcohol excessively, it should be borne in mind that the potentiation may increase the danger inherent in any suicide attempt or overdosage. Usage in Pregnancy - Safe use in pregnancy and lactation has not been established; therefore, use in pregnant women, nursing mothers or women who may become pregnant requires that possible benefits be weighed against possible hazards to mother and child. - In mice, rats, and rabbits, doses about ten times greater than the recommended human doses had no apparent adverse effects on reproduction. General precautions - When protriptyline HCl is used to treat the depressive component of schizophrenia, psychotic symptoms may be aggravated. Likewise, in manic-depressive psychosis, depressed patients may experience a shift toward the manic phase if they are treated with an antidepressant drug. 
Paranoid delusions, with or without associated hostility, may be exaggerated. In any of these circumstances, it may be advisable to reduce the dose of protriptyline or to use a major tranquilizing drug concurrently. - Symptoms, such as anxiety or agitation, may be aggravated in overactive or agitated patients. - The possibility of suicide in depressed patients remains during treatment and until significant remission occurs. This type of patient should not have access to large quantities of the drug. - Concurrent administration of protriptyline and electroshock therapy may increase the hazards of therapy. Such treatment should be limited to patients for whom it is essential. - Discontinue the drug several days before elective surgery, if possible. - Both elevation and lowering of blood sugar levels have been reported. # Adverse Reactions ## Clinical Trials Experience Central Nervous System Cardiovascular Respiratory Gastrointestinal Psychiatric Hypersensitive Reactions Hematologic Miscellaneous ## Postmarketing Experience There is limited information regarding Protriptyline Postmarketing Experience in the drug label. # Drug Interactions - Anticholinergic agents or sympathomimetic drugs, including epinephrine combined with local anesthetics - When protriptyline is given with anticholinergic agents or sympathomimetic drugs, including epinephrine combined with local anesthetics, close supervision and careful adjustment of dosages are required. - Anticholinergic agents or with neuroleptic drugs - Hyperpyrexia has been reported when tricyclic antidepressants are administered with anticholinergic agents or with neuroleptic drugs, particularly during hot weather. - Cimetidine - Cimetidine is reported to reduce hepatic metabolism of certain tricyclic antidepressants, thereby delaying elimination and increasing steady-state concentrations of these drugs. Clinically significant effects have been reported with the tricyclic antidepressants when used concomitantly with cimetidine. Increases in plasma levels of tricyclic antidepressants, and in the frequency and severity of side-effects, particularly anticholinergic, have been reported when cimetidine was added to the drug regimen. Discontinuation of cimetidine in well-controlled patients receiving tricyclic antidepressants and cimetidine may decrease the plasma levels and efficacy of the antidepressants. - Tramadol hydrochloride - Tricyclic antidepressants may enhance the seizure risk in patients taking ULTRAM (tramadol hydrochloride). - Alcohol interference - Protriptyline may enhance the response to alcohol and the effects of barbiturates and other CNS depressants. - Drugs Metabolized by Cytochrome P450 2D6 - The biochemical activity of the drug metabolizing isozyme cytochrome P450 2D6 (debrisoquine hydroxylase) is reduced in a subset of the Caucasian population (about 7% to 10% of Caucasian are so called “poor metabolizers”); reliable estimates of the prevalence of reduced P450 2D6 isozyme activity among Asian, African, and other populations are not yet available. Poor metabolizers have higher than expected plasma concentrations of tricyclic antidepressants (TCAs) when given usual doses. Depending on the fraction of drug metabolized by P450 2D6, the increase in plasma concentration may be small or quite large (8 fold increase in plasma AUC of the TCA). - In addition, certain drugs inhibit the activity of this isozyme and make normal metabolizers resemble poor metabolizers. 
An individual who is stable on a given dose of TCA may become abruptly toxic when given one of these inhibiting drugs as concomitant therapy. The drugs that inhibit cytochrome P450 2D6 include some that are not metabolized by the enzyme (quinidine; cimetidine) and many that are substrates for P450 2D6 (many other antidepressants, phenothiazines, and the Type 1C antiarrhythmics, propafenone and flecainide). While all the selective serotonin reuptake inhibitors (SSRIs), e.g., fluoxetine, sertraline, and paroxetine, inhibit P450 2D6, they may vary in the extent of inhibition. The extent to which SSRI-TCA interactions may pose clinical problems will depend on the degree of inhibition and the pharmacokinetics of the SSRI involved. Nevertheless, caution is indicated in the coadministration of TCAs with any of the SSRIs and also in switching from one class to the other. Of particular importance, sufficient time must elapse before initiating TCA treatment in a patient being withdrawn from fluoxetine, given the long half-life of the parent and active metabolite (at least 5 weeks may be necessary). - Concomitant use of tricyclic antidepressants with drugs that can inhibit cytochrome P450 2D6 may require lower doses than usually prescribed for either the tricyclic antidepressant or the other drug. Furthermore, whenever one of these other drugs is withdrawn from co-therapy, an increased dose of tricyclic antidepressant may be required. It is desirable to monitor TCA plasma levels whenever a TCA is going to be coadministered with another drug known to be an inhibitor of P450 2D6. # Use in Specific Populations ### Pregnancy Pregnancy Category (FDA): - Safe use in pregnancy and lactation has not been established; therefore, use in pregnant women, nursing mothers or women who may become pregnant requires that possible benefits be weighed against possible hazards to mother and child. - In mice, rats, and rabbits, doses about ten times greater than the recommended human doses had no apparent adverse effects on reproduction. Pregnancy Category (AUS): There is no Australian Drug Evaluation Committee (ADEC) guidance on usage of Protriptyline in women who are pregnant. ### Labor and Delivery There is no FDA guidance on use of Protriptyline during labor and delivery. ### Nursing Mothers There is no FDA guidance on the use of Protriptyline in women who are nursing. ### Pediatric Use Safety and effectiveness in the pediatric population have not been established. Anyone considering the use of protriptyline hydrochloride in a child or adolescent must balance the potential risks with the clinical need. ### Geriatric Use Clinical studies of protriptyline did not include sufficient numbers of subjects aged 65 and over to determine whether they respond differently from younger subjects. Other reported clinical experience has not identified differences in responses between the elderly and younger patients. In general, dose selection for an elderly patient should be cautious, usually starting at the low end of the dosing range, reflecting the greater frequency of decreased hepatic, renal, or cardiac function, and of concomitant disease or other drug therapy. ### Gender There is no FDA guidance on the use of Protriptyline with respect to specific gender populations. ### Race There is no FDA guidance on the use of Protriptyline with respect to specific racial populations. ### Renal Impairment There is no FDA guidance on the use of Protriptyline in patients with renal impairment.
### Hepatic Impairment There is no FDA guidance on the use of Protriptyline in patients with hepatic impairment. ### Females of Reproductive Potential and Males There is no FDA guidance on the use of Protriptyline in women of reproductive potentials and males. ### Immunocompromised Patients There is no FDA guidance one the use of Protriptyline in patients who are immunocompromised. # Administration and Monitoring ### Administration - Dosage should be initiated at a low level and increased gradually, noting carefully the clinical response and any evidence of intolerance. - Usual Adult Dosage: Fifteen to 40 mg a day divided into 3 or 4 doses. If necessary, dosage may be increased to 60 mg a day. Dosages above this amount are not recommended. *Increases should be made in the morning dose. - Adolescent and Elderly Patients: In general, lower dosages are recommended for these patients. Five mg 3 times a day may be given initially, and increased gradually if necessary. In elderly patients, the cardiovascular system must be monitored closely if the daily dose exceeds 20 mg. - When satisfactory improvement has been reached, dosage should be reduced to the smallest amount that will maintain relief of symptoms. ### Monitoring - Minor adverse reactions require reduction in dosage. Major adverse reactions or evidence of hypersensitivity require prompt discontinuation of the drug. - The safety and effectiveness of protriptyline in pediatric patients have not been established. # IV Compatibility There is limited information regarding the compatibility of Protriptyline and IV administrations. # Overdosage Deaths may occur from overdosage with this class of drugs. Multiple drug ingestion (including alcohol) is common in deliberate tricyclic antidepressant overdose. As management of overdose is complex and changing, it is recommended that the physician contact a poison control center for current information on treatment. Signs and symptoms of toxicity develop rapidly after tricyclic antidepressant overdose, therefore, hospital monitoring is required as soon as possible. - MANIFESTATIONS - Critical manifestations of overdosage include: cardiac dysrhythmias, severe hypotension, convulsions, and CNS depression, including coma. Changes in the electrocardiogram, particularly in QRS axis or width, are clinically significant indicators of tricyclic antidepressant toxicity. Other signs of overdose may include: confusion, disturbed concentration, transient visual hallucinations, dilated pupils, agitation, hyperactive reflexes, stupor, drowsiness, muscle rigidity, vomiting, hypothermia, hyperpyrexia, or any of the symptoms. - MANAGEMENT - General: Obtain an ECG and immediately initiate cardiac monitoring. Protect the patient’s airway, establish an intravenous line and initiate gastric decontamination. A minimum of six hours of observation with cardiac monitoring and observation for signs of CNS or respiratory depression, hypotension, cardiac dysrhythmias and/or conduction blocks, and seizures is necessary. If signs of toxicity occur at any time during this period, extended monitoring is required. There are case reports of patients succumbing to fatal dysrhythmias late after overdose. These patients had clinical evidence of significant poisoning prior to death and most received inadequate gastrointestinal decontamination. Monitoring of plasma drug levels should not guide management of the patient. 
- Gastrointestinal Decontamination: All patients suspected of a tricyclic antidepressant overdose should receive gastrointestinal decontamination. This should include large volume gastric lavage followed by activated charcoal. If consciousness is impaired, the airway should be secured prior to lavage. Emesis is contraindicated. - Cardiovascular: A maximal limb-lead QRS duration of ≥0.10 seconds may be the best indication of the severity of the overdose. Intravenous sodium bicarbonate should be used to maintain the serum pH in the range of 7.45 to 7.55. If the pH response is inadequate, hyperventilation may also be used. Concomitant use of hyperventilation and sodium bicarbonate should be done with extreme caution, with frequent pH monitoring. A pH >7.60 or a pCO2 <20 mmHg is undesirable. Dysrhythmias unresponsive to sodium bicarbonate therapy/hyperventilation may respond to lidocaine, bretylium or phenytoin. Type 1A and 1C antiarrhythmics are generally contraindicated (e.g., quinidine, disopyramide, and procainamide). In rare instances, hemoperfusion may be beneficial in acute refractory cardiovascular instability in patients with acute toxicity. However, hemodialysis, peritoneal dialysis, exchange transfusions, and forced diuresis generally have been reported as ineffective in tricyclic antidepressant poisoning. # Pharmacology ## Mechanism of Action Protriptyline hydrochloride is an antidepressant agent. The mechanism of its antidepressant action in man is not known. It is not a monoamine oxidase inhibitor, and it does not act primarily by stimulation of the central nervous system. Protriptyline has been found in some studies to have a more rapid onset of action than imipramine or amitriptyline. The initial clinical effect may occur within one week. Sedative and tranquilizing properties are lacking. The rate of excretion is slow. ## Structure Protriptyline HCl is N-methyl-5H-dibenzo[a,d]cycloheptene-5-propanamine hydrochloride. Its molecular formula is C19H21N·HCl. Protriptyline HCl, a dibenzocycloheptene derivative, has a molecular weight of 299.84. It is a white to yellowish powder that is freely soluble in water and soluble in dilute HCl. Protriptyline HCl is supplied as 5 mg or 10 mg film-coated tablets. Inactive ingredients are microcrystalline cellulose, pregelatinized starch, lactose monohydrate, dibasic calcium phosphate, sodium starch glycolate, magnesium stearate, hypromellose, triacetin, polysorbate, titanium dioxide and FD&C yellow 6 aluminum lake. The 10 mg tablet also contains polyethylene glycol and polysorbate 80. ## Pharmacodynamics Metabolic studies indicate that protriptyline is well absorbed from the gastrointestinal tract and is rapidly sequestered in tissues. Relatively low plasma levels are found after administration, and only a small amount of unchanged drug is excreted in the urine of dogs and rabbits. Preliminary studies indicate that demethylation of the secondary amine moiety occurs to a significant extent, and that metabolic transformation probably takes place in the liver. It penetrates the brain rapidly in mice and rats, and the drug present in the brain is almost entirely unchanged. Studies on the disposition of radioactive protriptyline in human test subjects showed significant plasma levels within 2 hours, peaking at 8 to 12 hours, then declining gradually. Urinary excretion studies in the same subjects showed significant amounts of radioactivity in 2 hours. The rate of excretion was slow.
Cumulative urinary excretion during 16 days accounted for approximately 50% of the drug. The fecal route of excretion did not seem to be important. Rev. 959/960:00 8/13 ## Pharmacokinetics There is limited information regarding Protriptyline Pharmacokinetics in the drug label. ## Nonclinical Toxicology There is limited information regarding Protriptyline Nonclinical Toxicology in the drug label. # Clinical Studies There is limited information regarding Protriptyline Clinical Studies in the drug label. # How Supplied Protriptyline Hydrochloride Tablets USP, 5 mg are dark orange, round, biconvex, film coated tablets, de-bossed “ɛ 96” on one side, and plain on the other side, available in bottles of 100’s. Protriptyline Hydrochloride Tablets USP, 10 mg are light orange, round, biconvex, film coated tablets, de-bossed “ ɛ 97” on one side, and plain on the other side, available in bottles of 100’s. Dispense in a tight container as defined in the USP. ## Storage Store at 20°-25°C (68°-77°F) . # Images ## Drug Images ## Package and Label Display Panel # Patient Counseling Information Protriptyline Hydrochloride Tablets, USP Antidepressant Medicines, Depression and other Serious Mental Illnesses, and Suicidal Thoughts or Actions Read the Medication Guide that comes with you or your family member’s antidepressant medicine. This Medication Guide is only about the risk of suicidal thoughts and actions with antidepressant medicines. Talk to your, or your family member’s, healthcare provider about: - All risks and benefits of treatment with antidepressant medicines - All treatment choices for depression or other serious mental illness What is the most important information I should know about antidepressant medicines, depression and other serious mental illnesses, and suicidal thoughts or actions? - Antidepressant medicines may increase suicidal thoughts or actions in some children, teenagers, and young adults when the medicine is first started. - Depression and other serious mental illnesses are the most important causes of suicidal thoughts and actions. Some people may have a particularly high risk of having suicidal thoughts or actions. These include people who have (or have a family history of) bipolar illness (also called manic-depressive illness) or suicidal thoughts or actions. How can I watch for and try to prevent suicidal thoughts and actions in myself or a family member? - Pay close attention to any changes, especially sudden changes, in mood, behaviors, thoughts, or feelings. This is very important when an antidepressant medicine is first started or when the dose is changed. - Call the healthcare provider right away to report new or sudden changes in moods, behavior, thoughts, or feelings. - Keep all follow-up visits with the healthcare provider as scheduled. Call the healthcare provider between visits as needed, especially if you have concerns about symptoms. Call a healthcare provider right away if you or your family member has any of the following symptoms, especially if they are new, worse, or worry you: - Thoughts about suicide or dying - Attempts to commit suicide - New or worse depression - New or worse anxiety - Feeling very agitated or restless - Panic attacks - Trouble sleeping (insomnia) - New or worse irritability - Acting aggressive, being angry, or violent - Acting on dangerous impulses - An extreme increase in activity and talking (mania) - Other unusual changes in behavior or mood What else do I need to know about antidepressant medicines? 
- Never stop an antidepressant medicine without first talking to a healthcare provider. Stopping an antidepressant medicine suddenly can cause other symptoms. - Antidepressants are medicines used to treat depression and other illnesses. It is important to discuss all the risks of treating depression and also the risks of not treating it. Patients and their families or other caregivers should discuss all treatment choices with the healthcare provider, not just the use of antidepressants. - Antidepressant medicines have other side effects. Talk to the healthcare provider about the side effects of the medicine prescribed for you or your family member. - Antidepressant medicines can interact with other medicines. Know all of the medicines that you or your family member takes. Keep a list of all medicines to show the healthcare provider. Do not start new medicines without first checking with your healthcare provider. - Not all antidepressant medicines prescribed for children are FDA approved for use in children. Talk to your child’s healthcare provider for more information. This Medication Guide has been approved by the U.S. Food and Drug Administration for all antidepressants. These are not all the possible side effects. Call your doctor for medical advice about side effects. You may report side effects to FDA at 1-800-FDA-1088. You may also report side effects to Hi-Tech Pharmacal Co., Inc. at 1-800-262-9010. # Precautions with Alcohol Alcohol-Protriptyline interaction has not been established. Talk to your doctor about the effects of taking alcohol with this medication. # Brand Names There is limited information regarding Protriptyline Brand Names in the drug label. # Look-Alike Drug Names There is limited information regarding Protriptyline Look-Alike Drug Names in the drug label. # Drug Shortage Status # Price
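The Administration section above quotes a few concrete dosing figures, and a short worked example can make their relationships easier to read. The sketch below (Python, purely illustrative; the function name, structure, and constants are this document's own restatement of the label figures, not part of the label) encodes the usual adult dosage of 15 to 40 mg/day in 3 or 4 divided doses, the 60 mg/day maximum, and the 20 mg/day threshold above which the label advises closer cardiovascular monitoring in elderly patients. It is a reading aid for those numbers, not clinical guidance.

```python
# Illustrative sketch only: restates the adult dosing figures quoted in the
# Administration section (usual 15-40 mg/day in 3-4 divided doses, maximum
# 60 mg/day; cardiovascular monitoring advised in elderly patients above
# 20 mg/day). All names and thresholds here are for illustration.
USUAL_ADULT_RANGE_MG = (15, 40)
MAX_ADULT_DAILY_MG = 60
ELDERLY_CV_MONITORING_THRESHOLD_MG = 20

def check_regimen(total_daily_mg: float, doses_per_day: int, elderly: bool = False) -> list[str]:
    """Return notes comparing a proposed regimen with the label figures above."""
    notes = []
    if total_daily_mg > MAX_ADULT_DAILY_MG:
        notes.append("exceeds the 60 mg/day maximum stated in the label")
    elif not (USUAL_ADULT_RANGE_MG[0] <= total_daily_mg <= USUAL_ADULT_RANGE_MG[1]):
        notes.append("outside the usual adult range of 15-40 mg/day")
    if doses_per_day not in (3, 4):
        notes.append("label describes 3 or 4 divided doses per day")
    if elderly and total_daily_mg > ELDERLY_CV_MONITORING_THRESHOLD_MG:
        notes.append("elderly: cardiovascular monitoring advised above 20 mg/day")
    return notes or ["consistent with the figures quoted above"]

# Example: 30 mg/day in 3 divided doses for an elderly patient
print(check_regimen(30, 3, elderly=True))
```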
Protriptyline Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1]; Associate Editor(s)-in-Chief: Pratik Bahekar, MBBS [2] # Disclaimer WikiDoc MAKES NO GUARANTEE OF VALIDITY. WikiDoc is not a professional health care provider, nor is it a suitable replacement for a licensed healthcare provider. WikiDoc is intended to be an educational tool, not a tool for any form of healthcare delivery. The educational content on WikiDoc drug pages is based upon the FDA package insert, National Library of Medicine content and practice guidelines / consensus statements. WikiDoc does not promote the administration of any medication or device that is not consistent with its labeling. Please read our full disclaimer here. # Black Box Warning # Overview Protriptyline is a Tricyclic antidepressant that is FDA approved for the {{{indicationType}}} of depression. There is a Black Box Warning for this drug as shown here. Common adverse reactions include hypotension, tachycardia, constipation, xerostomia, dizziness, somnolence, blurred vision. # Adult Indications and Dosage ## FDA-Labeled Indications and Dosage (Adult) Depression - 5 to 40 mg/day PO (divided into 3-4 doses per day); - Up to a max of 60 mg/day in divided doses; ## Off-Label Use and Dosage (Adult) ### Guideline-Supported Use There is limited information about Off-Label Guideline-Supported Use of Protriptyline in adult patients. ### Non–Guideline-Supported Use There is limited information about Off-Label Non–Guideline-Supported Use of Protriptyline in adult patients. # Pediatric Indications and Dosage ## FDA-Labeled Indications and Dosage (Pediatric) Depression - 5 mg PO 3 times a day; if necessary increase gradually - Safety and effectiveness in pediatric patients have not been established ## Off-Label Use and Dosage (Pediatric) ### Guideline-Supported Use There is limited information about Off-Label Guideline-Supported Use of Protriptyline in pediatric patients. ### Non–Guideline-Supported Use - There is limited information about Off-Label Non–Guideline-Supported Use of Protriptyline in pediatric patients. # Contraindications - Contraindicated in patients who have shown prior hypersensitivity to it. - It should not be given concomitantly with a monoamine oxidase inhibiting compound. - Hyperpyretic crises, severe convulsions, and deaths have occurred in patients receiving tricyclic antidepressant and monoamine oxidase inhibiting drugs simultaneously. - When it is desired to substitute protriptyline for a monoamine oxidase inhibitor, a minimum of 14 days should be allowed to elapse after the latter is discontinued. - Protriptyline should then be initiated cautiously with gradual increase in dosage until optimum response is achieved. - Protriptyline is contraindicated in patients taking cisapride because of the possibility of adverse cardiac interactions including prolongation of the QT interval, cardiac arrhythmias and conduction system disturbances. - This drug should not be used during the acute recovery phase following myocardial infarction. # Warnings Clinical Worsening and Suicide Risk - Patients with major depressive disorder (MDD), both adult and pediatric, may experience worsening of their depression and/or the emergence of suicidal ideation and behavior (suicidality) or unusual changes in behavior, whether or not they are taking antidepressant medications, and this risk may persist until significant remission occurs. 
Suicide is a known risk of depression and certain other psychiatric disorders, and these disorders themselves are the strongest predictors of suicide. There has been a long-standing concern, however, that antidepressants may have a role in inducing worsening of depression and the emergence of suicidality in certain patients during the early phases of treatment. Pooled analyses of short-term placebo-controlled trials of antidepressant drugs (SSRIs and others) showed that these drugs increase the risk of suicidal thinking and behavior (suicidality) in children, adolescents and young adults (aged 18-24) with major depressive disorder (MDD) and other psychiatric disorders. Short-term studies did not show an increase in the risk of suicidality with antidepressants compared to placebo in adults beyond age 24; there was a reduction with antidepressants compared to placebo in adults aged 65 and older. - The pooled analysis of placebo-controlled trials in children and adolescents with MDD, obsessive compulsive disorder (OCD), or other psychiatric disorders including a total of 24 short-term trials of 9 antidepressant drugs in over 4400 patients. The pooled analyses of placebo-controlled trials in adults with MDD or other psychiatric disorders included a total of 295 short-term trials (median duration of 2 months) of 11 antidepressant drugs in over 77,000 patients. There was considerable variation in risk of suicidality among drugs, but a tendency toward an increase in the younger patients for almost all drugs studied. There were differences in absolute risk of suicidality across the different indications, with the highest incidence in MDD. The risk differences (drug vs placebo), however, were relatively stable within age strata and across indications. - No suicides occurred in any of the pediatric trials. There were suicides in the adult trials, but the number was not sufficient to reach any conclusion about drug effect on suicide. - It is unknown whether the suicidality risk extends to longer-term use, i.e., beyond several months. However, there is substantial evidence from placebo-controlled maintenance trials in adults with depression that the use of antidepressants can delay the recurrence of depression. - All patients being treated with antidepressants for any indication should be monitored appropriately and observed closely for clinical worsening, suicidality, and unusual changes in behavior, especially during the initial few months of a course of drug therapy, or at times of dose changes, either increases or decreases. - The following symptoms, anxiety, agitation, panic attacks, insomnia, irritability, hostility, aggressiveness, impulsivity, akathisia (psychomotor restlessness), hypomania, and mania, have been reported in adult and pediatric patients being treated with antidepressants for major depressive disorder as well as for other indications, both psychiatric and nonpsychiatric. Although a causal link between the emergence of such symptoms and either the worsening of depression and/or the emergence of suicidal impulses has not been established, there is concern that such symptoms may represent precursors to emerging suicidality. 
- Consideration should be given to changing the therapeutic regimen, including possibly discontinuing the medication, in patients whose depression is persistently worse, or who are experiencing emergent suicidality or symptoms that might be precursors to worsening depression or suicidality, especially if these symptoms are severe, abrupt in onset, or were not part of the patient’s presenting symptoms. - If the decision has been made to discontinue treatment, medication should be tapered, as rapidly as is feasible, but with recognition that abrupt discontinuation can be associated with certain symptoms. - Families and caregivers of patients being treated with antidepressants for major depressive disorder or other indications, both psychiatric and nonpsychiatric, should be alerted about the need to monitor patients for the emergence of agitation, irritability, unusual changes in behavior, and the other symptoms described above, as well as the emergence of suicidality, and to report such symptoms immediately to health care providers. Such monitoring should include daily observation by families and caregivers. Prescriptions for protriptyline hydrochloride tablets should be written for the smallest quantity of tablets consistent with good patient management, in order to reduce the risk of overdose. Screening Patients for Bipolar Disorder - A major depressive episode may be the initial presentation of bipolar disorder. It is generally believed (though not established in controlled trials) that treating such an episode with an antidepressant alone may increase the likelihood of precipitation of a mixed/manic episode in patients at risk for bipolar disorder. Whether any of the symptoms described above represent such a conversion is unknown. However, prior to initiating treatment with an antidepressant, patients with depressive symptoms should be adequately screened to determine if they are at risk for bipolar disorder; such screening should include a detailed psychiatric history, including a family history of suicide, bipolar disorder, and depression. It should be noted that protriptyline hydrochloride is not approved for use in treating bipolar depression. - Protriptyline may block the antihypertensive effect of guanethidine or similarly acting compounds. - Protriptyline should be used with caution in patients with a history of seizures, and, because of its autonomic activity, in patients with a tendency to urinary retention, or increased intraocular tension. - Tachycardia and postural hypotension may occur more frequently with protriptyline than with other antidepressant drugs. Protriptyline should be used with caution in elderly patients and patients with cardiovascular disorders; such patients should be observed closely because of the tendency of the drug to produce tachycardia, hypotension, arrhythmias, and prolongation of the conduction time. Myocardial infarction and stroke have occurred with drugs of this class. - On rare occasions, hyperthyroid patients or those receiving thyroid medication may develop arrhythmias when this drug is given. - In patients who may use alcohol excessively, it should be borne in mind that the potentiation may increase the danger inherent in any suicide attempt or overdosage. Usage in Pregnancy - Safe use in pregnancy and lactation has not been established; therefore, use in pregnant women, nursing mothers or women who may become pregnant requires that possible benefits be weighed against possible hazards to mother and child. 
- In mice, rats, and rabbits, doses about ten times greater than the recommended human doses had no apparent adverse effects on reproduction.
General precautions
- When protriptyline HCl is used to treat the depressive component of schizophrenia, psychotic symptoms may be aggravated. Likewise, in manic-depressive psychosis, depressed patients may experience a shift toward the manic phase if they are treated with an antidepressant drug. Paranoid delusions, with or without associated hostility, may be exaggerated. In any of these circumstances, it may be advisable to reduce the dose of protriptyline or to use a major tranquilizing drug concurrently.
- Symptoms, such as anxiety or agitation, may be aggravated in overactive or agitated patients.
- The possibility of suicide in depressed patients remains during treatment and until significant remission occurs. This type of patient should not have access to large quantities of the drug.
- Concurrent administration of protriptyline and electroshock therapy may increase the hazards of therapy. Such treatment should be limited to patients for whom it is essential.
- Discontinue the drug several days before elective surgery, if possible.
- Both elevation and lowering of blood sugar levels have been reported.

# Adverse Reactions

## Clinical Trials Experience
Adverse reactions are grouped by organ system:
- Central Nervous System
- Cardiovascular
- Respiratory
- Gastrointestinal
- Psychiatric
- Hypersensitivity Reactions
- Hematologic
- Miscellaneous

## Postmarketing Experience
There is limited information regarding Protriptyline Postmarketing Experience in the drug label.

# Drug Interactions
- Anticholinergic agents or sympathomimetic drugs, including epinephrine combined with local anesthetics
- When protriptyline is given with anticholinergic agents or sympathomimetic drugs, including epinephrine combined with local anesthetics, close supervision and careful adjustment of dosages are required.
- Anticholinergic agents or neuroleptic drugs
- Hyperpyrexia has been reported when tricyclic antidepressants are administered with anticholinergic agents or with neuroleptic drugs, particularly during hot weather.
- Cimetidine
- Cimetidine is reported to reduce hepatic metabolism of certain tricyclic antidepressants, thereby delaying elimination and increasing steady-state concentrations of these drugs. Clinically significant effects have been reported with the tricyclic antidepressants when used concomitantly with cimetidine. Increases in plasma levels of tricyclic antidepressants, and in the frequency and severity of side-effects, particularly anticholinergic, have been reported when cimetidine was added to the drug regimen. Discontinuation of cimetidine in well-controlled patients receiving tricyclic antidepressants and cimetidine may decrease the plasma levels and efficacy of the antidepressants.
- Tramadol hydrochloride
- Tricyclic antidepressants may enhance the seizure risk in patients taking ULTRAM (tramadol hydrochloride).
- Alcohol
- Protriptyline may enhance the response to alcohol and the effects of barbiturates and other CNS depressants.
- Drugs Metabolized by Cytochrome P450 2D6
- The biochemical activity of the drug metabolizing isozyme cytochrome P450 2D6 (debrisoquine hydroxylase) is reduced in a subset of the Caucasian population (about 7% to 10% of Caucasians are so-called “poor metabolizers”); reliable estimates of the prevalence of reduced P450 2D6 isozyme activity among Asian, African, and other populations are not yet available.
Poor metabolizers have higher than expected plasma concentrations of tricyclic antidepressants (TCAs) when given usual doses. Depending on the fraction of drug metabolized by P450 2D6, the increase in plasma concentration may be small or quite large (8-fold increase in plasma AUC of the TCA).
- In addition, certain drugs inhibit the activity of this isozyme and make normal metabolizers resemble poor metabolizers. An individual who is stable on a given dose of TCA may become abruptly toxic when given one of these inhibiting drugs as concomitant therapy. The drugs that inhibit cytochrome P450 2D6 include some that are not metabolized by the enzyme (quinidine; cimetidine) and many that are substrates for P450 2D6 (many other antidepressants, phenothiazines, and the Type 1C antiarrhythmics, propafenone and flecainide). While all the selective serotonin reuptake inhibitors (SSRIs), e.g., fluoxetine, sertraline, and paroxetine, inhibit P450 2D6, they may vary in the extent of inhibition. The extent to which SSRI-TCA interactions may pose clinical problems will depend on the degree of inhibition and the pharmacokinetics of the SSRI involved. Nevertheless, caution is indicated in the coadministration of TCAs with any of the SSRIs and also in switching from one class to the other. Of particular importance, sufficient time must elapse before initiating TCA treatment in a patient being withdrawn from fluoxetine, given the long half-life of the parent and active metabolite (at least 5 weeks may be necessary).
- Concomitant use of tricyclic antidepressants with drugs that can inhibit cytochrome P450 2D6 may require lower doses than usually prescribed for either the tricyclic antidepressant or the other drug. Furthermore, whenever one of these other drugs is withdrawn from co-therapy, an increased dose of tricyclic antidepressant may be required. It is desirable to monitor TCA plasma levels whenever a TCA is going to be coadministered with another drug known to be an inhibitor of P450 2D6.

# Use in Specific Populations

### Pregnancy
Pregnancy Category (FDA):
- Safe use in pregnancy and lactation has not been established; therefore, use in pregnant women, nursing mothers or women who may become pregnant requires that possible benefits be weighed against possible hazards to mother and child.
- In mice, rats, and rabbits, doses about ten times greater than the recommended human doses had no apparent adverse effects on reproduction.
Pregnancy Category (AUS): There is no Australian Drug Evaluation Committee (ADEC) guidance on usage of Protriptyline in women who are pregnant.

### Labor and Delivery
There is no FDA guidance on use of Protriptyline during labor and delivery.

### Nursing Mothers
There is no FDA guidance on the use of Protriptyline in women who are nursing.

### Pediatric Use
Safety and effectiveness in the pediatric population have not been established. Anyone considering the use of protriptyline hydrochloride in a child or adolescent must balance the potential risks with the clinical need.

### Geriatric Use
Clinical studies of protriptyline did not include sufficient numbers of subjects aged 65 and over to determine whether they respond differently from younger subjects. Other reported clinical experience has not identified differences in responses between the elderly and younger patients.
In general, dose selection for an elderly patient should be cautious, usually starting at the low end of the dosing range, reflecting the greater frequency of decreased hepatic, renal, or cardiac function, and of concomitant disease or other drug therapy.

### Gender
There is no FDA guidance on the use of Protriptyline with respect to specific gender populations.

### Race
There is no FDA guidance on the use of Protriptyline with respect to specific racial populations.

### Renal Impairment
There is no FDA guidance on the use of Protriptyline in patients with renal impairment.

### Hepatic Impairment
There is no FDA guidance on the use of Protriptyline in patients with hepatic impairment.

### Females of Reproductive Potential and Males
There is no FDA guidance on the use of Protriptyline in women of reproductive potential and males.

### Immunocompromised Patients
There is no FDA guidance on the use of Protriptyline in patients who are immunocompromised.

# Administration and Monitoring

### Administration
- Dosage should be initiated at a low level and increased gradually, noting carefully the clinical response and any evidence of intolerance.
- Usual Adult Dosage: 15 to 40 mg a day divided into 3 or 4 doses. If necessary, dosage may be increased to 60 mg a day. Dosages above this amount are not recommended. Increases should be made in the morning dose.
- Adolescent and Elderly Patients: In general, lower dosages are recommended for these patients. Five mg 3 times a day may be given initially, and increased gradually if necessary. In elderly patients, the cardiovascular system must be monitored closely if the daily dose exceeds 20 mg.
- When satisfactory improvement has been reached, dosage should be reduced to the smallest amount that will maintain relief of symptoms.

### Monitoring
- Minor adverse reactions require reduction in dosage. Major adverse reactions or evidence of hypersensitivity require prompt discontinuation of the drug.
- The safety and effectiveness of protriptyline in pediatric patients have not been established.

# IV Compatibility
There is limited information regarding the compatibility of Protriptyline and IV administrations.

# Overdosage
Deaths may occur from overdosage with this class of drugs. Multiple drug ingestion (including alcohol) is common in deliberate tricyclic antidepressant overdose. As management of overdose is complex and changing, it is recommended that the physician contact a poison control center for current information on treatment. Signs and symptoms of toxicity develop rapidly after tricyclic antidepressant overdose; therefore, hospital monitoring is required as soon as possible.
- MANIFESTATIONS
- Critical manifestations of overdosage include: cardiac dysrhythmias, severe hypotension, convulsions, and CNS depression, including coma. Changes in the electrocardiogram, particularly in QRS axis or width, are clinically significant indicators of tricyclic antidepressant toxicity. Other signs of overdose may include: confusion, disturbed concentration, transient visual hallucinations, dilated pupils, agitation, hyperactive reflexes, stupor, drowsiness, muscle rigidity, vomiting, hypothermia, hyperpyrexia, or any of the symptoms listed under Adverse Reactions.
- MANAGEMENT
- General: Obtain an ECG and immediately initiate cardiac monitoring. Protect the patient's airway, establish an intravenous line and initiate gastric decontamination.
A minimum of six hours of observation with cardiac monitoring and observation for signs of CNS or respiratory depression, hypotension, cardiac dysrhythmias and/or conduction blocks, and seizures is necessary. If signs of toxicity occur at any time during this period, extended monitoring is required. There are case reports of patients succumbing to fatal dysrhythmias late after overdose. These patients had clinical evidence of significant poisoning prior to death and most received inadequate gastrointestinal decontamination. Monitoring of plasma drug levels should not guide management of the patient.
- Gastrointestinal Decontamination: All patients suspected of a tricyclic antidepressant overdose should receive gastrointestinal decontamination. This should include large volume gastric lavage followed by activated charcoal. If consciousness is impaired, the airway should be secured prior to lavage. Emesis is contraindicated.
- Cardiovascular: A maximal limb-lead QRS duration of ≥0.10 seconds may be the best indication of the severity of the overdose. Intravenous sodium bicarbonate should be used to maintain the serum pH in the range of 7.45 to 7.55. If the pH response is inadequate, hyperventilation may also be used. Concomitant use of hyperventilation and sodium bicarbonate should be done with extreme caution, with frequent pH monitoring. A pH >7.60 or a pCO2 <20 mmHg is undesirable. Dysrhythmias unresponsive to sodium bicarbonate therapy/hyperventilation may respond to lidocaine, bretylium or phenytoin. Type 1A and 1C antiarrhythmics are generally contraindicated (e.g., quinidine, disopyramide, and procainamide). In rare instances, hemoperfusion may be beneficial in acute refractory cardiovascular instability in patients with acute toxicity. However, hemodialysis, peritoneal dialysis, exchange transfusions, and forced diuresis generally have been reported as ineffective in tricyclic antidepressant poisoning.
Preliminary studies indicate that demethylation of the secondary amine moiety occurs to a significant extent, and that metabolic transformation probably takes place in the liver. In mice and rats it penetrates the brain rapidly, and the drug present in the brain is almost entirely unchanged drug. Studies on the disposition of radioactive protriptyline in human test subjects showed significant plasma levels within 2 hours, peaking at 8 to 12 hours, then declining gradually. Urinary excretion studies in the same subjects showed significant amounts of radioactivity in 2 hours. The rate of excretion was slow. Cumulative urinary excretion during 16 days accounted for approximately 50% of the drug. The fecal route of excretion did not seem to be important.

## Pharmacokinetics
There is limited information regarding Protriptyline Pharmacokinetics in the drug label.

## Nonclinical Toxicology
There is limited information regarding Protriptyline Nonclinical Toxicology in the drug label.

# Clinical Studies
There is limited information regarding Protriptyline Clinical Studies in the drug label.

# How Supplied
Protriptyline Hydrochloride Tablets USP, 5 mg are dark orange, round, biconvex, film-coated tablets, de-bossed “ɛ 96” on one side and plain on the other side, available in bottles of 100's. Protriptyline Hydrochloride Tablets USP, 10 mg are light orange, round, biconvex, film-coated tablets, de-bossed “ɛ 97” on one side and plain on the other side, available in bottles of 100's. Dispense in a tight container as defined in the USP.

## Storage
Store at 20°-25°C (68°-77°F) [See USP Controlled Room Temperature].

# Images

## Drug Images

## Package and Label Display Panel

# Patient Counseling Information
Protriptyline Hydrochloride Tablets, USP
Antidepressant Medicines, Depression and other Serious Mental Illnesses, and Suicidal Thoughts or Actions
Read the Medication Guide that comes with your or your family member's antidepressant medicine. This Medication Guide is only about the risk of suicidal thoughts and actions with antidepressant medicines. Talk to your, or your family member's, healthcare provider about:
- All risks and benefits of treatment with antidepressant medicines
- All treatment choices for depression or other serious mental illness
What is the most important information I should know about antidepressant medicines, depression and other serious mental illnesses, and suicidal thoughts or actions?
- Antidepressant medicines may increase suicidal thoughts or actions in some children, teenagers, and young adults when the medicine is first started.
- Depression and other serious mental illnesses are the most important causes of suicidal thoughts and actions. Some people may have a particularly high risk of having suicidal thoughts or actions. These include people who have (or have a family history of) bipolar illness (also called manic-depressive illness) or suicidal thoughts or actions.
How can I watch for and try to prevent suicidal thoughts and actions in myself or a family member?
- Pay close attention to any changes, especially sudden changes, in mood, behaviors, thoughts, or feelings. This is very important when an antidepressant medicine is first started or when the dose is changed.
- Call the healthcare provider right away to report new or sudden changes in moods, behavior, thoughts, or feelings.
- Keep all follow-up visits with the healthcare provider as scheduled.
Call the healthcare provider between visits as needed, especially if you have concerns about symptoms. Call a healthcare provider right away if you or your family member has any of the following symptoms, especially if they are new, worse, or worry you: - Thoughts about suicide or dying - Attempts to commit suicide - New or worse depression - New or worse anxiety - Feeling very agitated or restless - Panic attacks - Trouble sleeping (insomnia) - New or worse irritability - Acting aggressive, being angry, or violent - Acting on dangerous impulses - An extreme increase in activity and talking (mania) - Other unusual changes in behavior or mood What else do I need to know about antidepressant medicines? - Never stop an antidepressant medicine without first talking to a healthcare provider. Stopping an antidepressant medicine suddenly can cause other symptoms. - Antidepressants are medicines used to treat depression and other illnesses. It is important to discuss all the risks of treating depression and also the risks of not treating it. Patients and their families or other caregivers should discuss all treatment choices with the healthcare provider, not just the use of antidepressants. - Antidepressant medicines have other side effects. Talk to the healthcare provider about the side effects of the medicine prescribed for you or your family member. - Antidepressant medicines can interact with other medicines. Know all of the medicines that you or your family member takes. Keep a list of all medicines to show the healthcare provider. Do not start new medicines without first checking with your healthcare provider. - Not all antidepressant medicines prescribed for children are FDA approved for use in children. Talk to your child’s healthcare provider for more information. This Medication Guide has been approved by the U.S. Food and Drug Administration for all antidepressants. These are not all the possible side effects. Call your doctor for medical advice about side effects. You may report side effects to FDA at 1-800-FDA-1088. You may also report side effects to Hi-Tech Pharmacal Co., Inc. at 1-800-262-9010. # Precautions with Alcohol Alcohol-Protriptyline interaction has not been established. Talk to your doctor about the effects of taking alcohol with this medication. # Brand Names There is limited information regarding Protriptyline Brand Names in the drug label. # Look-Alike Drug Names There is limited information regarding Protriptyline Look-Alike Drug Names in the drug label. # Drug Shortage Status # Price
https://www.wikidoc.org/index.php/Protriptyline
361a2f3fc77cf5245aba82b84894eb53d7ec669a
wikidoc
Proventil HFA
Proventil HFA
Synonyms / Brand Names:

# Dosing and Administration
General treatment
For treatment of acute episodes of bronchospasm or prevention of asthmatic symptoms, the usual dosage for adults and children 4 years of age and older is two inhalations repeated every 4 to 6 hours. More frequent administration or a larger number of inhalations is not recommended. In some patients, one inhalation every 4 hours may be sufficient. Each actuation of Proventil HFA Inhalation Aerosol delivers 108 mcg of albuterol sulfate (equivalent to 90 mcg of albuterol base) from the mouthpiece. It is recommended to prime the inhaler before using for the first time and in cases where the inhaler has not been used for more than 2 weeks by releasing four “test sprays” into the air, away from the face. For more information on dosing please refer to Instructions for administration.

FDA Package Insert Resources: Indications, Contraindications, Side Effects, Drug Interactions, etc.
Calculate Creatinine Clearance: Online calculator of your patient's CrCl by a variety of formulas.
Convert Pounds to Kilograms: Online calculator of your patient's weight in pounds to kg for dosing estimates.
Publication Resources: Recent articles, WikiDoc State of the Art Review, Textbook Information
Trial Resources: Ongoing Trials, Trial Results
Guidelines & Evidence Based Medicine Resources: US National Guidelines, Cochrane Collaboration, etc.
Media Resources: Slides, Video, Images, MP3, Podcasts, etc.
Patient Resources: Discussion Groups, Handouts, Blogs, News, etc.
International Resources: en Español

# FDA Package Insert Resources
Indications, Contraindications, Side Effects, Drug Interactions, Precautions, Overdose, Instructions for Administration, How Supplied, FDA label, FDA on Proventil HFA

# Publication Resources
Most Recent Articles on Proventil HFA, Review Articles on Proventil HFA, Articles on Proventil HFA in N Eng J Med, Lancet, BMJ, Textbook Information on Proventil HFA

# Trial Resources
Ongoing Trials with Proventil HFA at ClinicalTrials.gov, Trial Results with Proventil HFA

# Guidelines & Evidence Based Medicine Resources
US National Guidelines Clearinghouse on Proventil HFA, Cochrane Collaboration on Proventil HFA, Cost Effectiveness of Proventil HFA

# Media Resources
Powerpoint Slides on Proventil HFA, Images of Proventil HFA, Podcasts & MP3s on Proventil HFA, Videos on Proventil HFA

# Patient Resources
Patient Information from National Library of Medicine, Patient Resources on Proventil HFA, Discussion Groups on Proventil HFA, Patient Handouts on Proventil HFA, Blogs on Proventil HFA, Proventil HFA in the News, Proventil HFA in the Marketplace

# International Resources
Proventil HFA en Español

Adapted from the FDA Package Insert.
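The creatinine clearance and weight-conversion calculators referenced above apply standard formulas. The sketch below is a minimal illustration of one such calculation and is not part of the Proventil HFA label; it assumes the Cockcroft-Gault equation, and the function names are hypothetical.

```python
# Illustrative sketch only; assumes the Cockcroft-Gault equation.
# Function names are hypothetical and not drawn from any package insert.

def pounds_to_kg(weight_lb: float) -> float:
    """Convert body weight from pounds to kilograms (1 lb = 0.45359237 kg)."""
    return weight_lb * 0.45359237

def cockcroft_gault_crcl(age_years: float, weight_kg: float,
                         serum_creatinine_mg_dl: float, female: bool) -> float:
    """Estimate creatinine clearance (mL/min) using the Cockcroft-Gault equation."""
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Example: a 70-year-old woman weighing 150 lb with serum creatinine 1.0 mg/dL
weight_kg = pounds_to_kg(150)                                    # about 68 kg
print(round(cockcroft_gault_crcl(70, weight_kg, 1.0, True), 1))  # about 56.2 mL/min
```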
https://www.wikidoc.org/index.php/Proventil_HFA
a1d849bf2410ec37a8897f958ef633f1d6ad9aec
wikidoc
Prulifloxacin
Prulifloxacin

# Overview
Prulifloxacin is an older synthetic antibiotic of the fluoroquinolone drug class undergoing clinical trials prior to a possible NDA (New Drug Application) submission to the U.S. Food and Drug Administration (FDA). It is a prodrug which is metabolized in the body to the active compound ulifloxacin. It was developed over two decades ago by Nippon Shinyaku Co. and was patented in Japan in 1987 and in the United States in 1989. It has been approved for the treatment of uncomplicated and complicated urinary tract infections and community-acquired respiratory tract infections in Italy, and for gastroenteritis, including infectious diarrheas, in Japan. Prulifloxacin has not been approved for use in the United States.

# History
In 1987 a European Patent (EP 315828) for prulifloxacin was issued to the Japan-based pharmaceutical company Nippon Shinyaku Co., Ltd (Nippon). Ten years after the issuance of the European patent, marketing approval was applied for and granted in Japan (March 1997). Subsequent to being approved by the Japanese authorities in 1997, prulifloxacin was co-marketed and jointly developed in Japan with Meiji Seika as licensee (Sword). In more recent times, Angelini ACRAF SpA, under license from Nippon Shinyaku, has fully developed prulifloxacin for the European market. Angelini is the licensee for the product in Italy. Following its launch in Italy, Angelini launched prulifloxacin in Portugal (January 2007) and it has been stated that further approvals will be sought in other European countries. Prulifloxacin is marketed in Japan and Italy as Quisnon (Nippon Shinyaku), Sword (Meiji), Unidrox (Angelini), and Prixina (Angelini); as Glimbax (ITF Hellas) in Greece; and as the generic Pruquin. In 1989 and 1992 United States patents (US 5086049) were issued to Nippon Shinyaku for prulifloxacin. It was not until June 2004, when Optimer Pharmaceuticals acquired exclusive rights to discover, develop and commercialize prulifloxacin (Pruvel) in the U.S. from Nippon Shinyaku Co., Ltd., that there were any attempts to seek FDA approval to market the drug in the United States. Optimer Pharmaceuticals expects to file an NDA (new drug application) for prulifloxacin some time in 2010. As the patent for prulifloxacin has already expired, Optimer Pharmaceuticals has stated that this may have an effect on the commercial prospects of prulifloxacin within the United States market.

# Licensed uses
Prulifloxacin has been approved in Italy, Japan, China, India, and Greece (as indicated), for treatment of infections caused by susceptible bacteria, in the following conditions:
- Acute uncomplicated lower urinary tract infections (simple cystitis)
- Complicated lower urinary tract infections
- Acute exacerbation of chronic bronchitis
- Gastroenteritis, including infectious diarrheas
- Prulifloxacin has not been approved for use in the United States, but may be approved in countries other than those indicated above.

# Availability
Prulifloxacin is available as:
- Tablets (250 mg, 450 mg or 600 mg)
In most countries, all formulations require a prescription.

# Mechanism of action
Like other fluoroquinolones, prulifloxacin prevents bacterial DNA replication, transcription, repair and recombination through inhibition of bacterial DNA gyrase. Quinolones and fluoroquinolones are bactericidal drugs, eradicating bacteria by interfering with DNA replication.
Quinolones are synthetic agents that have a broad spectrum of antimicrobial activity as well as a unique mechanism of action, resulting in inhibition of bacterial DNA gyrase and topoisomerase IV. Quinolones inhibit the bacterial DNA gyrase or the topoisomerase IV enzyme, thereby inhibiting DNA replication and transcription. For many gram-negative bacteria, DNA gyrase is the target, whereas topoisomerase IV is the target for many gram-positive bacteria. It is believed that eukaryotic cells do not contain DNA gyrase or topoisomerase IV.

# Contraindications
There are only four contraindications found within the package insert:
- "Prulifloxacin is contraindicated in patients with anamnesis of tendon diseases related to the administration of quinolones."
- "Prulifloxacin is contraindicated in persons with a history of hypersensitivity to Prulifloxacin, any member of the quinolone class of antimicrobial agents, or any of the product components."
- "Prulifloxacin is contraindicated in subjects with celiac disease."
- "Prulifloxacin is also considered to be contraindicated within the pediatric population, pregnancy, nursing mothers, and in patients with epilepsy or other seizure disorders."
- Pregnancy
The fluoroquinolones rapidly cross the blood-placenta and blood-milk barrier, and are extensively distributed into the fetal tissues. The fluoroquinolones have also been reported as being present in the mother's milk and are passed on to the nursing child.
- Pediatric population
Fluoroquinolones are not licensed by the U.S. FDA for use in children due to the risk of permanent injury to the musculoskeletal system, with two exceptions. However, the fluoroquinolones are licensed to treat lower respiratory infections in children with cystic fibrosis in the UK.

# Special precautions
"As with other quinolones, exposure to the sun or ultra-violet rays may cause phototoxicity reactions in patients treated with prulifloxacin."
"When treated with antibacterial agents of the quinolone group, patients with latent or known deficiencies for the glucose-6-phosphate dehydrogenase activity are predisposed to hemolytic reactions."

# Adverse Events
Within one review prulifloxacin was stated to have a similar tolerability profile to that of ciprofloxacin. Within another study it was found that prulifloxacin patients experienced a similar number of adverse reactions compared to those in the ciprofloxacin group (15.4% vs 12.7%). There were four serious adverse events in each treatment arm, including 1 death in the prulifloxacin arm. None were considered treatment related by the investigator. If approved in the U.S., prulifloxacin will likely carry a black box warning for tendon damage, as the FDA has determined that this is a class effect of fluoroquinolones. Prulifloxacin has a reduced effect on the QTc interval compared to other fluoroquinolones and may be a safer choice for patients with pre-existing risk factors for arrhythmia.

# Interactions
- Probenecid: Prulifloxacin urinary excretion decreases when concomitantly administered with probenecid.
- Fenbufen: The concomitant administration of fenbufen can cause increased risk of convulsions.
- Hypoglycemic agents: May cause hypoglycemia in diabetic patients under treatment with hypoglycemic agents.
- Theophylline: May cause a decreased theophylline clearance.
- Warfarin: May enhance the effects of oral anticoagulants such as warfarin and its derivatives.
- Nicardipine: May potentiate the phototoxicity of prulifloxacin.
# Overdose In the event of acute overdosage, the stomach should be emptied by inducing vomiting or by gastric lavage; the patient should be carefully observed and given supportive treatment. # Pharmacokinetics Prulifloxacin 600 mg achieves peak plasma concentration (Cmax) of ulifloxacin (1.6μg/mL) in a median time to Cmax (tmax) of 1 hour. Ulifloxacin is ≈45% bound to serum proteins in vivo. It is extensively distributed throughout tissues and shows good penetration into many body tissues. The elimination half-life (t1/2) of ulifloxacin after single-dose prulifloxacin 300–600 mg ranged from 10.6 to 12.1 hours. After absorption from the gastrointestinal tract, prulifloxacin undergoes extensive first-pass metabolism (hydrolysis by esterases, mainly paraoxonase to form ulifloxacin, the active metabolite). Unchanged ulifloxacin is predominantly eliminated by renal excretion. Quoting from the available package insert.
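For orientation, the reported half-life range corresponds to a first-order elimination rate constant via the standard relationship k = ln 2 / t1/2. This is a worked calculation based on the figures quoted above, not an additional value from the label.

```latex
k = \frac{\ln 2}{t_{1/2}}
  \approx \frac{0.693}{10.6\ \text{h}} \ \text{to}\ \frac{0.693}{12.1\ \text{h}}
  \approx 0.057 \ \text{to}\ 0.065\ \text{h}^{-1}
```

In other words, roughly half of the circulating ulifloxacin is cleared about every 11 hours.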
https://www.wikidoc.org/index.php/Prulifloxacin
146e04a25eb22c7ff13d048d3eeb0b92767e7dbd
wikidoc
Psychasthenia
Psychasthenia

# Overview
Psychasthenia is a psychological disorder characterized by phobias, obsessions, compulsions, or excessive anxiety. The term is no longer in psychiatric diagnostic use, although it still forms one of the ten clinical subscales of the popular self-report personality inventories MMPI-I and MMPI-II.

# Presentation
The MMPI subscale 7 describes psychasthenia as akin to obsessive-compulsive disorder, and as characterised by excessive doubts, compulsions, obsessions, and unreasonable fears. The psychasthenic has an inability to resist specific actions or thoughts, regardless of their maladaptive nature. In addition to obsessive-compulsive features, the scale taps abnormal fears, self-criticism, difficulties in concentration, and guilt feelings. The scale assesses long-term (trait) anxiety, although it is somewhat responsive to situational stress as well.
The psychasthenic has insufficient control over their conscious thinking and memory, sometimes wandering aimlessly and/or forgetting what they were doing. Thoughts can be scattered and take significant effort to organize, often resulting in sentences that don't come out as intended, therefore making little sense to others. The constant mental effort and characteristic insomnia induce fatigue, which worsens the condition. Symptoms can possibly be greatly reduced with concentration exercises and therapy, depending on whether the condition is psychological or biological.

# History
The term psychasthenia is historically associated primarily with the work of Pierre Janet, who divided the neuroses into the psychasthenias and the hysterias, discarding the term neurasthenia since it implied a neurological theory where none existed. Whereas the hysterias involved at their source a narrowing of the field of consciousness, the psychasthenias involved at root a disturbance in the fonction du réel ('function of reality'), a kind of weakness in the ability to attend to, adjust to, and synthesise one's changing experience (cf. executive function in today's empiricist psychologies). Carl Jung later made the hysteric and the psychasthenic states the prototypes of what he described as introverted and extroverted personalities.
Karl Jaspers preserves the term 'neurasthenia', defining it in terms of 'irritable weakness' and describing phenomena such as irritability, sensitivity, a painful sensibility, abnormal responsiveness to stimuli, bodily pains, strong experience of fatigue, etc. This is contrasted with psychasthenia which, following Janet, he describes as a variety of phenomena 'held together by the theoretical concept of a 'diminution of psychic energy'.' The psychasthenic person prefers to 'withdraw from his fellows and not be exposed to situations in which his abnormally strong 'complexes' rob him of presence of mind, memory and poise.' The psychasthenic lacks confidence, is prone to obsessional thoughts, unfounded fears, self-scrutiny and indecision. This state in turn promotes withdrawal from the world and daydreaming, yet this only makes things worse. 'The psyche generally lacks an ability to integrate its life or to work through and manage its various experiences; it fails to build up its personality and make any steady development.' Jaspers believed that some of Janet's more extreme cases of psychasthenia were cases of schizophrenia.

# Notes and references
- ↑ American Heritage Dictionary
- ↑ Ellenberger (1970), p. 375; Janet (1903)
- ↑ Ellenberger (1970), p. 377
- ↑ Jaspers (1963), pp. 441-443

# Further reading
- Jaspers, Karl (1990). General Psychopathology (7th ed.). Manchester: Manchester University Press. ISBN 0-7190-0236-2.
- Janet, Pierre (1903). Les Obsessions et la Psychasthénie. Paris: Alcan.
- Ellenberger, Henri (1970). The Discovery of the Unconscious. Basic Books. ISBN 0-465-01672-3.
https://www.wikidoc.org/index.php/Psychasthenia
398e314d1b28e28ba332426c62cfb230d6301291
wikidoc
Psychometrics
Psychometrics Psychometrics is the field of study concerned with the theory and technique of educational and psychological measurement, which includes the measurement of knowledge, abilities, attitudes, and personality traits. The field is primarily concerned with the study of differences between individuals and between groups of individuals. It involves two major research tasks, namely: (i) the construction of instruments and procedures for measurement; and (ii) the development and refinement of theoretical approaches to measurement. # Origins and background Much of the early theoretical and applied work in psychometrics was undertaken in an attempt to measure intelligence. Francis Galton is often referred to as the father of psychometrics, having devised and used mental tests. However, the origin of psychometrics also has connections to the related field of psychophysics. Charles Spearman, a pioneer in psychometrics who developed approaches to the measurement of intelligence, studied under Wilhelm Wundt and was trained in psychophysics. The psychometrician L. L. Thurstone later developed and applied a theoretical approach to the measurement referred to as the law of comparative judgment, an approach which has close connections to the psychophysical theory developed by Ernst Heinrich Weber and Gustav Fechner. In addition, Spearman and Thurstone both made important contributions to the theory and application of factor analysis, a statistical method that has been developed and used extensively in psychometrics. More recently, psychometric theory has been applied in the measurement of personality, attitudes and beliefs, academic achievement, and in health-related fields. Measurement of these unobservable phenomena is difficult, and much of the research and accumulated art in this discipline has been developed in an attempt to properly define and quantify such phenomena. Critics, including practitioners in the physical sciences and social activists, have argued that such definition and quantification is impossibly difficult, and that such measurements are often misused. Proponents of psychometric techniques can reply, though, that their critics often misuse data by not applying psychometric criteria, and also that various quantitative phenomena in the physical sciences, such as heat and forces, cannot be observed directly but must be inferred from their manifestations. Figures who made significant contributions to psychometrics include Karl Pearson, L. L. Thurstone, Georg Rasch, Johnson O'Connor, Frederick M. Lord and Arthur Jensen. # Definition of measurement in the social sciences The definition of measurement in the social sciences has a long history. A currently widespread definition, proposed by Stanley Smith Stevens (1946), is that measurement is "the assignment of numerals to objects or events according to some rule". This definition was introduced in the paper in which Stevens proposed four levels of measurement. Although widely adopted, this definition differs in important respects from the more classical definition of measurement adopted throughout the physical sciences, which is that measurement is the numerical estimation and expression of the magnitude of one quantity relative to another (Michell, 1997). Indeed, Stevens' definition of measurement was put forward in response to the British Ferguson Committee, whose chair, A. Ferguson, was a physicist. 
The committee was appointed in 1932 by the British Association for the Advancement of Science to investigate the possibility of quantitatively estimating sensory events. Although its chair and other members were physicists, the committee also comprised several psychologists. The committee's report highlighted the importance of the definition of measurement. While Stevens' response was to propose a new definition, which has had considerable influence in the field, this was by no means the only response to the report. Another, notably different, response was to accept the classical definition, as reflected in statements such as that of Reese. These divergent responses are reflected to a large extent within alternative approaches to measurement. For example, methods based on covariance matrices are typically employed on the premise that numbers, such as raw scores derived from assessments, are measurements. Such approaches implicitly entail Stevens' definition of measurement, which requires only that numbers are assigned according to some rule. The main research task, then, is generally considered to be the discovery of associations between scores, and of factors posited to underlie such associations. On the other hand, when measurement models such as the Rasch model are employed, numbers are not assigned based on a rule. Instead, in keeping with Reese's statement above, specific criteria for measurement are stated, and the objective is to construct procedures or operations that provide data which meet the relevant criteria. Measurements are estimated based on the models, and tests are conducted to ascertain whether it has been possible to meet the relevant criteria.

# Instruments and procedures
The first psychometric instruments were designed to measure the concept of intelligence. The best known historical approach involves the Stanford-Binet IQ test, developed originally by the French psychologist Alfred Binet. Contrary to a fairly widespread misconception, there is no compelling evidence that it is possible to measure innate intelligence through such instruments, in the sense of an innate learning capacity unaffected by experience, nor was this the original intention when they were developed. Nevertheless, IQ tests are useful tools for various purposes. An alternative conception of intelligence is that cognitive capacities within individuals are a manifestation of a general component, or general intelligence factor, as well as cognitive capacity specific to a given domain.
Psychometrics is applied widely in educational assessment to measure abilities in domains such as reading, writing, and mathematics. The main approaches in applying tests in these domains have been Classical Test Theory and the more modern Item Response Theory and Rasch measurement models. These modern approaches permit joint scaling of persons and assessment items, which provides a basis for mapping of developmental continua by allowing descriptions of the skills displayed at various points along a continuum. Such approaches provide powerful information regarding the nature of developmental growth within various domains.
Another major focus in psychometrics has been on personality testing. There has been a range of theoretical approaches to conceptualising and measuring personality. Some of the better known instruments include the Minnesota Multiphasic Personality Inventory, the Five-factor Model (or "Big 5") and the Myers-Briggs Type Indicator. Attitudes have also been studied extensively in psychometrics.
A common approach to the measurement of attitudes is the use of the Likert scale. An alternative approach involves the application of unfolding measurement models, the most general being the Hyperbolic Cosine Model (Andrich & Luo, 1993).
# Theoretical approaches
Psychometric theory involves several distinct areas of study. First, psychometricians have developed a large body of theory used in the development of mental tests and analysis of data collected from these tests. This work can be roughly divided into classical test theory (CTT) and the more recent item response theory (IRT: Embretson & Reise, 2000; Hambleton & Swaminathan, 1985). An approach which is similar to IRT but also quite distinctive, in terms of its origins and features, is represented by the Rasch model for measurement. The development of the Rasch model, and the broader class of models to which it belongs, was explicitly founded on requirements of measurement in the physical sciences (Rasch, 1960). Second, psychometricians have developed methods for working with large matrices of correlations and covariances. Techniques in this general tradition include factor analysis (finding important underlying dimensions in the data), multidimensional scaling (finding a simple representation for high-dimensional data) and data clustering (finding objects which are like each other). In these multivariate descriptive methods, users try to simplify large amounts of data. More recently, structural equation modeling and path analysis represent more sophisticated approaches to solving this problem of large covariance matrices. These methods allow statistically sophisticated models to be fitted to data and tested to determine if they are adequate fits. One of the main deficiencies of factor analysis in its various forms is the lack of clear cut-off points. A usual procedure is to stop factoring when eigenvalues drop below one, because the original sphere shrinks. The lack of cut-off points also affects other multivariate methods. At bottom, psychometric spaces are Hilbertian but are treated as if they were Cartesian; the problem is therefore one of interpretation more than of the methods themselves.
## Key concepts
The key traditional concepts in classical test theory are reliability and validity. A reliable measure measures something consistently, while a valid measure measures what it is supposed to measure. A reliable measure may be consistent without necessarily being valid: a measurement instrument like a broken ruler may always under-measure a quantity by the same amount each time (consistently), but the resulting quantity is still wrong, that is, invalid. For another analogy, a reliable rifle will have a tight cluster of bullets in the target, while a valid one will center its cluster around the center of the target, whether or not the cluster is a tight one. Both reliability and validity may be assessed mathematically. Internal consistency may be assessed by correlating performance on two halves of a test (split-half reliability); the value of the Pearson product-moment correlation coefficient is adjusted with the Spearman-Brown prediction formula to correspond to the correlation between two full-length tests. Other approaches include the intra-class correlation (the ratio of variance of measurements of a given target to the variance of all targets). A commonly used measure is Cronbach's α, which is equivalent to the mean of all possible split-half coefficients.
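To make the reliability coefficients described above concrete, the following sketch computes a split-half correlation, steps it up to full test length with the Spearman-Brown prediction formula, and computes Cronbach's α for a small table of invented item scores. The data, the odd/even split, and the variable names are purely illustrative assumptions, not taken from any real instrument.

```python
# Minimal sketch of two classical reliability indices on made-up item scores.
# Rows are respondents, columns are items; the numbers are purely illustrative.
from statistics import mean, pvariance

scores = [
    [3, 4, 3, 5, 4, 4],
    [2, 2, 3, 2, 3, 2],
    [5, 4, 5, 4, 5, 5],
    [1, 2, 1, 2, 2, 1],
    [4, 3, 4, 4, 3, 4],
]

def pearson(x, y):
    # Pearson product-moment correlation between two score lists.
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    varx = sum((a - mx) ** 2 for a in x)
    vary = sum((b - my) ** 2 for b in y)
    return cov / (varx * vary) ** 0.5

# Split-half reliability: correlate totals on odd- and even-numbered items,
# then step the half-test correlation up to full length with Spearman-Brown.
odd_totals = [sum(row[0::2]) for row in scores]
even_totals = [sum(row[1::2]) for row in scores]
r_half = pearson(odd_totals, even_totals)
spearman_brown = 2 * r_half / (1 + r_half)

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
k = len(scores[0])
item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
total_var = pvariance([sum(row) for row in scores])
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)

print(f"split-half r = {r_half:.3f}, Spearman-Brown = {spearman_brown:.3f}, alpha = {alpha:.3f}")
```

The same quantities are available in standard statistical packages; the hand-rolled versions here are only meant to show how directly the definitions above translate into computation.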
Stability over repeated measures is assessed with the Pearson coefficient, as is the equivalence of different versions of the same measure (different forms of an intelligence test, for example). Other measures are also used. Validity may be assessed by correlating measures with a criterion measure known to be valid. When the criterion measure is collected at the same time as the measure being validated, the goal is to establish concurrent validity; when the criterion is collected later, the goal is to establish predictive validity. A measure has construct validity if it is related to other variables as required by theory. Content validity is simply a demonstration that the items of a test are drawn from the domain being measured. In a personnel selection example, test content is based on a defined statement or set of statements of knowledge, skill, ability, or other characteristics obtained from a job analysis. Predictive or concurrent validity cannot exceed the square root of the correlation between two versions of the same measure, that is, the square root of the reliability. Item response theory models the relationship between latent traits and responses to test items. Among other advantages, IRT provides a basis for obtaining an estimate of the location of a test-taker on a given latent trait as well as the standard error of measurement of that location. For example, a university student's knowledge of history can be deduced from his or her score on a university test and then be compared reliably with a high school student's knowledge deduced from a less difficult test. Scores derived by classical test theory do not have this characteristic, and actual ability (rather than ability relative to other test-takers) must be assessed by comparing scores to those of a norm group randomly selected from the population. In fact, all measures derived from classical test theory are dependent on the sample tested, while, in principle, those derived from item response theory are not.
# Standards of quality
The considerations of validity and reliability typically are viewed as essential elements for determining the quality of any test. However, professional and practitioner associations frequently have placed these concerns within broader contexts when developing standards and making overall judgments about the quality of any test as a whole within a given context. A consideration of concern in many applied research settings is whether or not the metric of a given psychological inventory is meaningful or arbitrary.
## Testing standards
In this field, the Standards for Educational and Psychological Testing place standards about validity and reliability, along with errors of measurement and related considerations, under the general topic of test construction, evaluation and documentation. The second major topic covers standards related to fairness in testing, including fairness in testing and test use, the rights and responsibilities of test takers, testing individuals of diverse linguistic backgrounds, and testing individuals with disabilities. The third and final major topic covers standards related to testing applications, including the responsibilities of test users, psychological testing and assessment, educational testing and assessment, testing in employment and credentialing, plus testing in program evaluation and public policy.
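As a small illustration of the item response theory models discussed under Key concepts above, the sketch below evaluates the Rasch model's item response function and locates a hypothetical test-taker on the latent trait by maximum likelihood, together with the standard error of that location. The item difficulties and response pattern are invented for the example; operational IRT work would estimate item parameters from data with specialised software.

```python
# Rasch model sketch: P(correct | ability theta, difficulty b) = 1 / (1 + exp(b - theta)).
# Item difficulties and the response pattern below are hypothetical.
import math

difficulties = [-1.5, -0.5, 0.0, 0.8, 1.6]   # one difficulty per item, in logits
responses = [1, 1, 1, 0, 0]                  # 1 = correct, 0 = incorrect

def p_correct(theta, b):
    # Probability of a correct response under the Rasch model.
    return 1.0 / (1.0 + math.exp(b - theta))

def log_likelihood(theta):
    # Log-likelihood of the observed response pattern at a given ability.
    return sum(
        math.log(p_correct(theta, b)) if x == 1 else math.log(1 - p_correct(theta, b))
        for x, b in zip(responses, difficulties)
    )

# Crude grid search for the maximum-likelihood ability estimate.
grid = [i / 100 for i in range(-400, 401)]
theta_hat = max(grid, key=log_likelihood)

# Standard error of the location is 1 / sqrt(test information) under the Rasch model.
info = sum(p_correct(theta_hat, b) * (1 - p_correct(theta_hat, b)) for b in difficulties)
se = 1.0 / math.sqrt(info)

print(f"estimated ability = {theta_hat:.2f} logits, standard error = {se:.2f}")
```

Because person ability and item difficulty are expressed on the same logit scale, the estimate and its standard error in principle do not depend on which particular norm group happened to take the test, which is the sample-independence property contrasted with classical test theory above.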
## Evaluation standards
In the field of evaluation, and in particular educational evaluation, the Joint Committee on Standards for Educational Evaluation has published three sets of standards for evaluations. The Personnel Evaluation Standards was published in 1988, The Program Evaluation Standards (2nd edition) was published in 1994, and The Student Evaluation Standards was published in 2003. Each publication presents and elaborates a set of standards for use in a variety of educational settings. The standards provide guidelines for designing, implementing, assessing and improving the identified form of evaluation. Each of the standards has been placed in one of four fundamental categories to promote educational evaluations that are proper, useful, feasible, and accurate. In these sets of standards, validity and reliability considerations are covered under the accuracy topic. For example, the student accuracy standards help ensure that student evaluations will provide sound, accurate, and credible information about student learning and performance.
https://www.wikidoc.org/index.php/Psychometric
26a9fc64f71ae26c2cd1c866b537bfd70041ae2f
wikidoc
Psychosurgery
Psychosurgery
Psychosurgery is a term for surgeries of the brain involving procedures that modulate the performance of the brain, and thus effect changes in cognition, with the intent to treat or alleviate severe mental illness. It was originally thought that severing the nerves that give power to ideas would achieve the desirable result of a loss of affect and an emotional flattening that would diminish creativity and imagination, the idea being that these were the human characteristics that were disturbed. Historically, the procedure most typically considered psychosurgery, the prefrontal leukotomy, is now almost universally shunned as inappropriate, due in part to the emergence of less-invasive or less-objectionable methods of treatment such as psychiatric medication and modified electroconvulsive therapy. In modern neurosurgery, however, minimally invasive techniques such as gamma knife irradiation and, foremost, deep brain stimulation have arisen as novel tools for psychosurgery.
# History
There is evidence that trepanning (or trephining), the practice of drilling holes in the skull for pseudomedical reasons, has been in widespread, if infrequent, use since 5000 BC. This may have been done in an attempt to allow the brain to expand in the case of increased brain fluid pressure, for example, after head injuries. (Several documented cases of healed wounds indicate that such crude surgery could be survived back then.) However, psychosurgery as understood today was not commonly practiced until the early 20th century. The first systematic attempts at human psychosurgery began in 1935, when the neurosurgeon Egas Moniz teamed up with the surgeon Almeida Lima at the University of Lisbon to perform a series of prefrontal lobotomies, a procedure severing the connection between the prefrontal cortex and the rest of the brain. Moniz and Lima claimed fair results, especially in the treatment of depression, although about 6% of patients did not survive the operation, and there were often marked and adverse changes in the patients' personality and social functioning. Despite the risks, the procedure was taken up with some enthusiasm, notably in the U.S., as a treatment for previously incurable mental conditions. Moniz received the Nobel Prize in 1949. The initial criteria for treatment were quite strict: only a few conditions of "tortured self-concern" were put forward for treatment. Severe chronic anxiety, depression with risk of suicide and incapacitating obsessive-compulsive disorder were the main symptoms treated. The original lobotomy was a crude operation, and the practice soon developed into a more exact stereotactic procedure in which only very small lesions were placed in the brain.
## "Ice pick lobotomy"
Psychosurgery was popularised in the United States when Walter Freeman invented the "ice pick lobotomy", a procedure which literally used an ice pick and a rubber mallet instead of standard surgical equipment to perform a transorbital lobotomy. Leaving no visible scars, the ice pick lobotomy was heralded as a great advance in surgery, and was eventually done with anesthesia accomplished through electroshock administered to the patient moments before the procedure. In what is now widely considered to be a highly invasive procedure, Freeman would hammer the ice pick into the skull just above the tear duct and wiggle it around. From 1936 through the 1950s, he advocated lobotomies throughout the United States.
Such was Freeman's zeal that he began to travel around the nation in his own personal van, which he called his "lobotomobile", demonstrating the procedure in many medical centres. He reputedly even performed a few lobotomies in hotel rooms. Freeman's advocacy led to great popularity for lobotomy as a general cure for all perceived ills, including misbehaviour in children. Ultimately between 40,000 and 50,000 patients were lobotomised. A follow-up study of almost 10,000 patients claimed 41% were "recovered" or "greatly improved", 28% were "minimally improved", 25% showed "no change", 4% had died, while only 2% were made worse off (Tooth et al., 1961).
# Neurological effect
The frontal lobe of the brain controls a number of advanced cognitive functions, as well as motor control. Motor control is located at the rear of the frontal lobe, and is usually unaffected by psychosurgery. The anterior or prefrontal area is involved in impulse control, judgement in everyday life and situations, language, memory, motor function, problem solving, sexual behaviour, socialization and spontaneity. The frontal lobes assist in planning, coordinating, controlling and executing behaviour. Thus, the efficacy of psychosurgery was often related to changes in personality and reduced spontaneity (this included making the person quieter and decreasing their craving to be sexually active). Certain processes related to schizophrenia are also believed to occur in the frontal lobe, and may explain some success. However, certain types of inappropriate behaviours increased as a function of reduced impulse control (in some respects patients became more childlike). Further, the procedure decreased patients' ability to function as members of the community by reducing their problem solving and planning abilities and making them less flexible and adaptive. It usually had no bearing on IQ except with respect to problem solving.
# Present day
Lobotomies gradually became unfashionable with the development of antipsychotic drugs and are rarely performed. The era of lobotomy is now generally regarded as a barbaric episode in psychiatric history. There was a strong division amongst the medical profession as to the efficacy of the treatment, and concern over both the irreversible nature of the operation and its extension into the treatment of unsuitable cases (drug or alcohol dependence, sexual disorders, etc.). Psychosurgery was offered in only a few centres, and by the 1960s the number of operations was in decline. Signal improvements in psychopharmacology and behaviour therapy provided the opportunity for more effective and less-invasive treatment. Today, psychosurgery may be a treatment of last resort for OCD sufferers, and for anorexic patients in Chile, the United States, Sweden and Mexico. The efficacy is not high: one study of cingulotomy (which usually involves a 2–3 cm lesion in the cingulum near the corpus callosum) found improvement in 5 out of 18 patients (Baer et al., 1995). Psychosurgery is legally practiced in controlled and regulated U.S. centers, as well as in Finland, Sweden, the United Kingdom, Spain, India, Belgium and the Netherlands. In France, 32 psychosurgical operations were performed between 1980 and 1986 according to an IGAS report; about 15 are performed each year in the UK, 70 in Belgium, and about 15 at the Massachusetts General Hospital in Boston. Some consider use of endoscopic sympathetic block (a form of endoscopic thoracic sympathectomy) for patients with anxiety disorder to be a psychiatric treatment, despite it not being surgery of the brain.
There is also renewed interest in using it to treat schizophrenia. ESB disrupts brain regulation of many organs normally affected by emotion, such as the heart and blood vessels. A large study demonstrated significant reduction in "alertness" and "fear" in patients with social phobia, as well as improvement in their quality of life. ESB for anxiety is advocated as an alternative by surgeons on the internet; most psychologists, however, prefer medication and counseling.
# Legal restrictions
In 1977, the U.S. Congress created a National Committee for the Protection of Human Subjects of Biomedical and Behavioral Research to investigate allegations that psychosurgery, including lobotomy techniques, was used to control minorities or restrain individual rights, or that it had unethical after-effects. It concluded that, in general, psychosurgery had positive effects. However, concerns about lobotomy steadily grew, and countries such as Germany and Japan, as well as several U.S. states, prohibited it. In Australia, psychosurgery is performed by a select group of neurosurgeons. In Victoria, each individual operation must receive the consent of a Review Board before it may proceed. The Soviet Union made lobotomies illegal in 1950.
# Individuals who underwent lobotomy
- Phineas Gage: Famously suffered an accident in 1848 which severely damaged his frontal lobe. The effects were comparable to a surgical lobotomy.
- Josef Hassid: Polish violin prodigy and schizophrenic who died at 26.
- Rosemary Kennedy: Sister of John F. Kennedy.
- Rose Williams: Sister of Tennessee Williams.
- Howard Dully: One of Walter Freeman's youngest victims, author of My Lobotomy (2007)
# Fictional examples
- Frances Farmer: Though Farmer is the person perhaps best associated in the public mind with lobotomy due to its depiction in the fictionalized biographical film Frances, archival medical and other records have conclusively proven Farmer never underwent the procedure. The author who initially alleged the lobotomy later admitted in court he had made it up.
- Ken Kesey's famed fictional character, Randle Patrick McMurphy, in One Flew Over the Cuckoo's Nest, who was played in the movie by Jack Nicholson.
- J. Frank Parnell, erratic driver of the radioactive Chevy Malibu in the movie Repo Man.
- A Hole in One, a 2004 movie about a young lady who wants an ice pick lobotomy during the height of its popularity.
- Rat Korga, major character in Samuel R. Delany's science fiction novel Stars in My Pocket Like Grains of Sand, voluntarily opts for psychosurgery to make him content to be a slave.
- Several victims of a serial killer named Gerry Schnauz in an episode of The X-Files entitled "Unruhe".
- Session 9, a 2001 horror movie about a group of men hired to remove the asbestos from a defunct mental hospital.
- Hannibal, in which Hannibal Lecter lobotomizes Paul Krendler, played by Ray Liotta.
- In the book The Bell Jar by Sylvia Plath, the character Esther Greenwood meets a girl named Valerie in the asylum who has had a lobotomy.
- Iron Maiden's famous fictional mascot, Eddie, was lobotomised on-stage during one of Maiden's live shows; this concert was filmed for German TV but that particular segment was cut out after being deemed "too violent". The cover of their fourth album Piece of Mind (and many of the following releases) shows Eddie after being lobotomised.
- In the book Cyteen by C. J. Cherryh, psychosurgery involves the use of drugs that bring the mind into a state where it is very receptive to audio and/or visual cues, which help the psychosurgeon to reprogram the individual. This procedure is non-invasive, involving the administration of drugs rather than actual surgery.
- In the television miniseries Kingdom Hospital, the character Mary was killed by a botched lobotomy. In the companion book, The Journals of Eleanor Druse, Eleanor had a transorbital lobotomy in her childhood.
https://www.wikidoc.org/index.php/Psychosurgery
3de97c544377b686a489e13bf7c15b64780f4c17
wikidoc
Psychotherapy
Psychotherapy
# Overview
Psychotherapy is an interpersonal, relational intervention used by trained psychotherapists to aid clients in problems of living. This usually includes increasing the individual's sense of well-being and reducing subjective discomfort. Psychotherapists employ a range of techniques based on experiential relationship building, dialogue, communication and behavior change that are designed to improve the mental health of a client or patient, or to improve group relationships (such as in a family).
# Forms
Most forms of psychotherapy use only spoken conversation, though some also use various other forms of communication such as the written word, artwork, drama, narrative story, or therapeutic touch. Psychotherapy occurs within a structured encounter between a trained therapist and client(s). Purposeful, theoretically based psychotherapy began in the 19th century with psychoanalysis; since then, scores of other approaches have been developed and continue to be created. Therapy is generally used to respond to a variety of specific or non-specific manifestations of clinically diagnosable crises. Treatment of everyday problems is more often referred to as counseling (a distinction originally adopted by Carl Rogers), but the term is sometimes used interchangeably with "psychotherapy". Psychotherapeutic interventions are often designed to treat the patient in the medical model, although not all psychotherapeutic approaches follow the model of "illness/cure". Some practitioners, such as those of the humanistic schools, see themselves in an educational or helper role. Because sensitive topics are often discussed during psychotherapy, therapists are expected, and usually legally bound, to respect client or patient confidentiality.
# Systems of Psychotherapy
There are several main systems of psychotherapy:
- Cognitive behavioral
- Psychodynamic
- Existential
- Humanistic/supportive
- Brief therapy (sometimes called "strategic" therapy, or solution focused brief therapy)
- Systemic Therapy (including family therapy & marriage counseling)
- Integrative Psychotherapy
# History
In an informal sense, psychotherapy can be said to have been practiced through the ages, as individuals received psychological counsel and reassurance from others. Purposeful, theoretically-based psychotherapy was probably first developed in the Middle East during the 9th century by the Persian physician Rhazes, who was at one time the chief physician of the Baghdad hospital. In the West, however, serious mental disorders were generally treated as demonic or medical conditions requiring punishment and confinement until the advent of moral treatment approaches in the 18th century. This brought about a focus on the possibility of psychosocial intervention - including reasoning, moral encouragement and group activities - to rehabilitate the "insane". Psychoanalysis was perhaps the first specific school of psychotherapy, developed by Sigmund Freud and others through the early 1900s. Trained as a neurologist, Freud began focusing on problems that appeared to have no discernible organic basis, and theorized that they had psychological causes originating in childhood experiences and the unconscious mind. Techniques such as dream interpretation, free association, transference and analysis of the id, ego and superego were developed.
Many theorists, including Anna Freud, Alfred Adler, Carl Jung, Karen Horney, Otto Rank, Erik Erikson, Melanie Klein, and Heinz Kohut, built upon Freud's fundamental ideas and often formed their own differentiating systems of psychotherapy. These were all later grouped under the broader label of psychodynamic, meaning anything that involved the psyche's conscious/unconscious influence on external relationships and the self. Sessions tended to number into the hundreds over several years. Behaviorism developed in the 1920s, and behavior modification as a therapy became popularized in the 1950s and 1960s. Notable contributors were Joseph Wolpe in South Africa, M. B. Shapiro and Hans Eysenck in Britain, and B. F. Skinner in the United States. Behavioral therapy approaches relied on principles of operant conditioning, classical conditioning and social learning theory to bring about therapeutic change in observable symptoms. The approach became commonly used for phobias, as well as other disorders. Some therapeutic approaches developed out of the European school of existential philosophy. Concerned mainly with the individual's ability to develop and preserve a sense of meaning and purpose throughout life, major contributors to the field in the United States (e.g., Irvin Yalom, Rollo May) and in Europe (Viktor Frankl, Ludwig Binswanger, Medard Boss, R. D. Laing, Emmy van Deurzen) attempted to create therapies sensitive to common 'life crises' springing from the essential bleakness of human self-awareness, previously accessible only through the complex writings of existential philosophers (e.g., Søren Kierkegaard, Jean-Paul Sartre, Gabriel Marcel, Martin Heidegger, Friedrich Nietzsche). The uniqueness of the patient-therapist relationship thus also forms a vehicle for therapeutic enquiry. A related body of thought in psychotherapy started in the 1950s with Carl Rogers. Based on existentialism and the works of Abraham Maslow and his hierarchy of human needs, Rogers brought person-centered psychotherapy into mainstream focus. Rogers' basic tenets were unconditional positive regard, genuineness, and empathic understanding, with each demonstrated by the counselor. The aim was to create a relationship conducive to enhancing the client's psychological well-being, by enabling the client to fully experience and express themselves. Others developed the approach, such as Fritz and Laura Perls in the creation of Gestalt therapy, as well as Marshall Rosenberg, founder of Nonviolent Communication, and Eric Berne, founder of Transactional Analysis. Later these fields of psychotherapy would become what is known as humanistic psychotherapy today. Self-help groups and books became widespread. During the 1950s, Albert Ellis developed Rational Emotive Behavior Therapy (REBT). A few years later, the psychiatrist Aaron T. Beck developed a form of psychotherapy known as cognitive therapy. Both of these were short, structured and present-focused therapies aimed at changing a person's distorted thinking, by contrast with the long-lasting insight-based approach of psychodynamic or humanistic therapies. Cognitive and behavioral therapy approaches were combined during the 1970s, resulting in cognitive behavioral therapy. Being oriented towards symptom relief, collaborative empiricism and modifying people's core beliefs, the approach gained widespread acceptance as a primary treatment for numerous disorders.
A "third wave" of cognitive and behavioral therapies developed, including Acceptance and Commitment Therapy and Dialectical behavior therapy, which expanded the concepts to other disorders and/or added novel components. Counseling methods developed, including solution-focused therapy and systemic coaching. Postmodern psychotherapies such as Narrative Therapy and coherence therapy did not impose definitions of mental health and illness, but rather saw the goal of therapy as something constructed by the client and therapist in a social context. Systems Therapy also developed, which focuses on family and group dynamics—and Transpersonal psychology, which focuses on the spiritual facet of human experience. Other important orientations developed in the last three decades include Feminist therapy, Brief therapy, Somatic Psychology, Expressive therapy, and applied Positive psychology. A survey of over 2,500 US therapists in 2006 revealed the most utilised models of therapy and the ten most influential therapists of the previous quarter-century. # General Concerns Psychotherapy can be seen as an interpersonal invitation offered by (often trained and regulated) psychotherapists to aid clients in reaching their full potential or to cope better with problems of life. Psychotherapists usually receive a benefit or remuneration in some form in return for their time and skills. This is one way in which the relationship can be distinguished from an altruistic offer of assistance. Psychotherapy often includes techniques to increase awareness for example, or to enable other choices of thought, feeling or action; to increase the sense of well-being and to better manage subjective discomfort or distress. Psychotherapy can be provided on a one to one basis or in group therapy. It can occur face to face, over the telephone or the internet. Its time frame may be a matter of weeks or over many years. It can be seen as ultimately about agency and the meaning of life. Psychotherapy can also be seen as a social construct that cannot occur in a power vacuum nor without reference to semiotics (meaning systems and symbols) - irrespective of how practitioners may describe their work or research its effects. Therapy may address specific forms of diagnosable mental illness, or everyday problems in relationships or meeting personal goals. Treatment of everyday problems is more often referred to as counseling (a distinction originally adopted by Carl Rogers) but the term is sometimes used interchangeably with "psychotherapy". Psychotherapists employ a range of techniques to influence or pursuade the client to adapt or change in the direction the client has chosen. These can be based on clear thinking about their options; experiential relationship building; dialogue, communication and adoption of behavior change strategies. Each is designed to improve the mental health of a client or patient, or to improve group relationships (such as in a family). Most forms of psychotherapy use only spoken conversation, though some also use various other forms of communication such as the written word, artwork, drama, narrative story, or therapeutic touch. Psychotherapy occurs within a structured encounter between a trained therapist and client(s). Because sensitive topics are often discussed during psychotherapy, therapists are expected, and usually legally bound, to respect client or patient confidentiality. 
Psychotherapists are often trained, certified, and licensed, with a range of different certifications and licensing requirements across jurisdictions. Psychotherapy may be undertaken by clinical psychologists, social workers, marriage-family therapists, expressive therapists, trained nurses, psychiatrists, psychoanalysts, mental health counselors, school counselors, or professionals of other mental health disciplines. Psychiatrists have medical qualifications and may also administer prescription medication. The primary training of a psychiatrist focuses on the biological aspects of mental health conditions, with some training in psychotherapy. Psychologists have more training in psychological assessment and research and, in addition, a great deal of training in psychotherapy. Social workers have specialized training in linking patients to community and institutional resources, in addition to elements of psychological assessment and psychotherapy. Marriage-Family Therapists have training similar to that of the social worker, and also have specific training and experience working with relationships and family issues. Licensed professional counselors (LPCs) generally have special training in career, mental health, school, or rehabilitation counseling. Many of the wide variety of training programs are multiprofessional, that is, psychiatrists, psychologists, mental health nurses, and social workers may be found in the same training group. Consequently, specialized psychotherapeutic training in most countries requires a program of continuing education after the basic degree, or involves multiple certifications attached to one specific degree.
# Specific schools and approaches
## Scientific validation of different psychotherapeutic approaches
In the psychotherapeutic community there has been discussion of evidence-based psychotherapy. Virtually no comparisons of different psychotherapies with long follow-up times have been carried out. The Helsinki Psychotherapy Study is a randomized clinical trial in which patients are monitored for 12 months after the onset of the study treatments, each of which lasted approximately 6 months. The assessments are to be completed at the baseline examination and during the follow-up after 3, 7, and 9 months and 1, 1.5, 2, 3, 4, 5, 6, and 7 years. The final results of this trial are yet to be published, since follow-up evaluations will continue up to 2009.
## Psychoanalysis
Psychoanalysis was the earliest form of psychotherapy, but many other theories and techniques are also now used by psychotherapists, psychologists, psychiatrists, personal growth facilitators, occupational therapists and social workers. Techniques for group therapy have been developed. While behaviour is often a target of the work, many approaches value working with feelings and thoughts. This is especially true of the psychodynamic schools of psychotherapy, which today include Jungian therapy and Psychodrama as well as the psychoanalytic schools. Other approaches focus on the link between the mind and body and try to access deeper levels of the psyche through manipulation of the physical body. Examples are Rolfing, Pulsing and postural integration.
## Gestalt Therapy
Gestalt Therapy is a major overhaul of psychoanalysis. In its early development it was called "concentration therapy" by its founders, Frederick and Laura Perls.
However, its mix of theoretical influences became most organized around the work of the gestalt psychologists; thus, by the time Gestalt Therapy, Excitement and Growth in the Human Personality (Perls, Hefferline, and Goodman) was written, the approach became known as "Gestalt Therapy." Gestalt Therapy rests on essentially four load-bearing theoretical walls: phenomenological method, dialogical relationship, field-theoretical strategies, and experimental freedom. Some have considered it an existential phenomenology, while others have described it as a phenomenological behaviorism. Gestalt therapy is a humanistic, holistic, and experiential approach that does not rely on talking alone, but facilitates awareness in the various contexts of life by moving from talk about relatively remote situations to action and direct, current experience.

## Group Psychotherapy
The therapeutic use of groups in modern clinical practice can be traced to the early years of the 20th century, when the American chest physician Pratt, working in Boston, described forming 'classes' of fifteen to twenty patients with tuberculosis who had been rejected for sanatorium treatment. The term 'group therapy', however, was first used around 1920 by Jacob L. Moreno, whose main contribution was the development of psychodrama, in which groups were used as both cast and audience for the exploration of individual problems by reenactment under the direction of the leader. The more analytic and exploratory use of groups in both hospital and out-patient settings was pioneered by a few European psychoanalysts who emigrated to the USA, such as Paul Schilder, who treated severely neurotic and mildly psychotic out-patients in small groups at Bellevue Hospital, New York. The power of groups was most influentially demonstrated in Britain during the Second World War, when several psychoanalysts and psychiatrists proved the value of group methods for officer selection in the War Office Selection Boards. A chance to run an Army psychiatric unit on group lines was then given to several of these pioneers, notably Wilfred Bion and Rickman, followed by S. H. Foulkes, Main, and Bridger. The Northfield Hospital in Birmingham gave its name to what came to be called the two 'Northfield Experiments', which provided the impetus for the post-war development of both social therapy, that is, the therapeutic community movement, and the use of small groups for the treatment of neurotic and personality disorders.

## Medical and non-medical models
A distinction can also be made between those psychotherapies that employ a medical model and those that employ a humanistic model. In the medical model the client is seen as unwell, and the therapist employs their skill to help the client back to health. The extensive use in the United States of the DSM-IV, the Diagnostic and Statistical Manual of Mental Disorders, is an example of a medically exclusive model. In the humanistic model, the therapist facilitates learning in the individual, and the client's own natural process draws them to a fuller understanding of themselves; an example would be gestalt therapy.

Some psychodynamic practitioners distinguish between more uncovering and more supportive psychotherapy. Uncovering psychotherapy emphasizes facilitating the client's insight into the roots of their difficulties. The best-known example of an uncovering psychotherapy is classical psychoanalysis.
Supportive psychotherapy, by contrast, stresses strengthening the client's defenses and often provides encouragement and advice. Depending on the client's personality, a more supportive or more uncovering approach may be optimal. Most psychotherapists use a combination of uncovering and supportive approaches.

## Cognitive therapy
Cognitive behavioral therapy focuses on modifying everyday thoughts and behaviors, with the aim of positively influencing emotions. The therapist helps clients recognise distorted thinking and learn to replace unhealthy thoughts with more realistic substitute ideas. This approach includes Dialectical behavior therapy.

## Expressive therapy
Expressive therapy is a form of therapy that utilizes artistic expression as its core means of treating clients. Expressive therapists use the different disciplines of the creative arts as therapeutic interventions. These include dance therapy, drama therapy, art therapy, music therapy, and writing therapy, among other modalities. Expressive therapists believe that often the most effective way of treating a client is through the expression of imagination in a creative work, and through integrating and processing the issues that are raised in the act.

## Integrative Psychotherapy
Integrative Psychotherapy represents an attempt to combine ideas and strategies from more than one theoretical approach. These approaches include mixing core beliefs and combining proven techniques. Forms of integrative psychotherapy include Multimodal Therapy, the Transtheoretical Model, Cyclical Psychodynamics, Systematic Treatment Selection, Cognitive Analytic Therapy, Internal Family Systems Model, and Multitheoretical Psychotherapy. In practice, most experienced psychotherapists develop their own integrative approach over time.

## Adaptations for children
Counseling and psychotherapy must be adapted to meet the developmental needs of children. Many counseling preparation programs include courses in human development. Since children often do not have the ability to articulate thoughts and feelings, counselors will use a variety of media such as crayons, paint, clay, puppets, bibliocounseling (books), and toys. The use of play therapy is often rooted in psychodynamic theory, but other approaches such as Solution Focused Brief Counseling may also employ the use of play in counseling. In many cases the counselor may prefer to work with the caretaker of the child, especially if the child is younger than age four.

# The therapeutic relationship
Research has shown that the quality of the relationship between the therapist and the client has a greater influence on client outcomes than the specific type of psychotherapy used by the therapist (this was first suggested by Saul Rosenzweig in 1936). Accordingly, most contemporary schools of psychotherapy focus on the healing power of the therapeutic relationship. This research is extensively discussed (with many references) in Hubble, Duncan and Miller (1999), from which the quotes in this section are taken, and in Wampold (2001). A literature review by M. J. Lambert (1992) estimated that 40% of client changes are due to extratherapeutic influences, 30% are due to the quality of the therapeutic relationship, 15% are due to expectancy (placebo) effects, and 15% are due to specific techniques.
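Lambert's estimates amount to a simple partition of the variance in client change. The short Python sketch below does nothing more than restate the percentages quoted above and check that they sum to 100%; the dictionary labels and the small text bar chart are illustrative choices made here, not part of the original review.

```python
# Illustrative only: Lambert's (1992) estimated contributions to client change,
# expressed as the percentages quoted in the text above.
lambert_estimates = {
    "extratherapeutic influences": 40,
    "therapeutic relationship": 30,
    "expectancy (placebo) effects": 15,
    "specific techniques": 15,
}

# The four estimates are meant to partition all client change.
assert sum(lambert_estimates.values()) == 100

# Print a tiny text "bar chart", largest factor first (one '#' per 5 points).
for factor, percent in sorted(lambert_estimates.items(), key=lambda kv: -kv[1]):
    bar = "#" * (percent // 5)
    print(f"{factor:32s} {percent:3d}% {bar}")
```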
Extratherapeutic influences include client motivation and the severity of the problem: for example, a withdrawn, alcoholic client who is "dragged into therapy" by his or her spouse, possesses poor motivation for therapy, regards mental health professionals with suspicion, and harbors hostility toward others is not nearly as likely to find relief as the client who is eager to discover how he or she has contributed to a failing marriage and expresses determination to make personal changes. In one study, some highly motivated clients showed measurable improvement before their first session with the therapist, suggesting that just making the appointment can be an indicator of readiness to change. Tallman and Bohart (1999) note that outside of therapy people rarely have a friend who will truly listen to them for more than 20 minutes (Stiles, 1995); further, friends and relatives are often involved in the problem and therefore do not provide the "safe outside perspective" that may be required. Nonetheless, as noted above, people often solve their problems by talking to friends, relatives, co-workers, religious leaders, or some other confidant in their lives, or by thinking and exploring themselves.

## Confidentiality
Confidentiality is an integral part of the therapeutic relationship and psychotherapy in general.

# Effectiveness and criticism
There is considerable controversy over which form of psychotherapy is most effective, and more specifically, which types of therapy are optimal for treating which sorts of problems. The dropout level is quite high: one meta-analysis of 125 studies concluded that the mean dropout rate was 46.86%. The high level of dropout has raised some criticism about the relevance and efficacy of psychotherapy.

Psychotherapy outcome research, in which the effectiveness of psychotherapy is measured by questionnaires given to patients before, during, and after treatment, has had difficulty distinguishing between the success or failure of the different approaches to therapy. Not surprisingly, those who stay with their therapist for longer periods are more likely to report positively on what develops into a longer-term relationship. This might mean, of course, that "treatment" becomes open-ended, with related concerns regarding the total financial costs. As early as 1952, in one of the earliest studies of psychotherapy treatment, Hans Eysenck reported that two thirds of therapy patients improved significantly or recovered on their own within two years, whether or not they received psychotherapy.

Many psychotherapists believe that the nuances of psychotherapy cannot be captured by questionnaire-style observation, and prefer to rely on their own clinical experiences and conceptual arguments to support the type of treatment they practice. This amounts to saying "if you believe you are doing some good, you are," a conception of dubious merit. In 2001 Bruce Wampold, Ph.D., of the University of Wisconsin published "The Great Psychotherapy Debate". In it Wampold, a former statistician who studied primarily outcomes with depressed patients, reported that:
- psychotherapy can be more effective than placebo,
- no single treatment modality has the edge in efficacy, and
- factors common to different psychotherapies, such as whether or not the therapist has established a positive working alliance with the client/patient, account for much more of the variance in outcomes than specific techniques or modalities.
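As an aside on the dropout figure quoted above, a mean dropout rate of this kind is typically pooled across many studies. The sketch below illustrates the arithmetic with three entirely hypothetical studies; the sample sizes and dropout counts are invented for illustration and are not drawn from the cited meta-analysis.

```python
# Toy illustration of pooling a dropout rate across studies.
# These three studies are hypothetical; the cited meta-analysis pooled
# 125 real studies and reported a mean dropout rate of 46.86%.
studies = [
    {"n": 80,  "dropouts": 30},   # hypothetical study 1
    {"n": 150, "dropouts": 75},   # hypothetical study 2
    {"n": 45,  "dropouts": 25},   # hypothetical study 3
]

# Unweighted mean of the per-study dropout rates
per_study_rates = [s["dropouts"] / s["n"] for s in studies]
unweighted_mean = sum(per_study_rates) / len(per_study_rates)

# Sample-size-weighted (pooled) rate
pooled_rate = sum(s["dropouts"] for s in studies) / sum(s["n"] for s in studies)

print(f"unweighted mean dropout: {unweighted_mean:.1%}")
print(f"pooled dropout rate:     {pooled_rate:.1%}")
```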
Some report that by attempting to program or manualize treatment, psychotherapists may actually be reducing efficacy, although the unstructured approach of many psychotherapists may not appeal to patients motivated to solve their difficulties through the application of specific techniques different from their past "mistakes." Critics of psychotherapy are skeptical of the healing power of a psychotherapeutic relationship. Since any intervention takes time, critics note that the passage of time alone, without therapeutic intervention, can result in psycho-social healing despite the absence of counseling. Critics also point to the many resources available to a person experiencing emotional distress (the friendly support of friends, peers, family members, clergy contacts, personal reading, research, and independent coping), suggesting that psychotherapy is inappropriate or unneeded for many. These critics note that humans had been dealing with crises, navigating problems and finding solutions long before the advent of psychotherapy. Some psychotherapists have responded to scientific critique by arguing that psychotherapy is not a science but a craft.

Further critiques have emerged from feminist, constructionist and discursive sources. Key to these is the issue of power. In this regard there is a concern that clients are persuaded, both inside and outside of the consulting room, to understand themselves and their difficulties in ways that are consistent with therapeutic ideas. This means that alternative ideas (e.g., feminist, economic, spiritual) are sometimes implicitly undermined. Critics suggest that we idealise the situation when we think of therapy only as a helping relationship. It is also fundamentally a political practice, in that some cultural ideas and practices are supported while others are undermined or disqualified. So, while it is seldom intended, the therapist-client relationship always participates in society's power relations and political dynamics.
https://www.wikidoc.org/index.php/Psychotherapeutic
f5539a3b02902bade3347f36e9c151ebea54495f
wikidoc
Public health
Public health

# Overview
Public health is the study and practice of managing threats to the health of a community. The field pays special attention to the social context of disease and health, and focuses on improving health through society-wide measures like vaccinations and the fluoridation of drinking water, or through policies like seatbelt and non-smoking laws. The goal of public health is to improve lives through the prevention or treatment of disease. The United Nations' World Health Organization defines health as "a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity." In 1920, C.E.A. Winslow defined public health as "the science and art of preventing disease, prolonging life and promoting health through the organized efforts and informed choices of society, organizations, public and private, communities and individuals." The public-health approach can be applied to a population of just a handful of people or to the whole human population. Public health is typically divided into epidemiology, biostatistics and health services. Environmental, social, behavioral, and occupational health are also important subfields.

# Objectives
The focus of a public health intervention is to prevent rather than treat a disease, through surveillance of cases and the promotion of healthy behaviors. In addition, in many cases treating a disease can be vital to preventing its spread to others, such as during an outbreak of infectious disease or contamination of food or water supplies. Vaccination programs and distribution of condoms are examples of public health measures.

Most countries have their own government public health agencies, sometimes known as ministries of health, to respond to domestic health issues. In the United States, state and local health departments are on the front line of public health initiatives. The United States Public Health Service (PHS), led by the Surgeon General of the United States, and the Centers for Disease Control and Prevention, headquartered in Atlanta and a part of the PHS, are involved with several international health activities, in addition to their national duties.

There is a vast discrepancy in access to healthcare and public health initiatives between developed nations and developing nations. In the developing world, public health infrastructures are still forming. There may not be enough trained health workers or monetary resources to provide even a basic level of medical care and disease prevention. As a result, a large majority of disease and mortality in the developing world results from and contributes to extreme poverty. For example, many African governments spend less than USD$10 per person per year on healthcare, while, in the United States, the federal government spent approximately USD$4,500 per capita in 2000. Many diseases are preventable through simple, non-medical methods. For example, research has shown that the simple act of hand washing can prevent many contagious diseases.
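To give a rough sense of the scale of the spending gap described above, the following back-of-the-envelope sketch uses only the two per-capita figures quoted in this section; the ratio and the "days of spending" framing are illustrative arithmetic added here, not figures from any cited source.

```python
# Back-of-the-envelope comparison of the per-capita health spending figures
# quoted above (circa 2000). Only the two input numbers come from the text.
african_gov_per_capita_usd = 10    # "less than USD$10 per person per year"
us_federal_per_capita_usd = 4500   # "approximately USD$4,500 per capita in 2000"

ratio = us_federal_per_capita_usd / african_gov_per_capita_usd
days_equivalent = 365 * african_gov_per_capita_usd / us_federal_per_capita_usd

print(f"US federal spending is roughly {ratio:.0f}x the quoted African figure")
print(f"About {days_equivalent:.1f} days of US per-capita spending equals a full year at $10/person")
```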
Public health plays an important role in disease prevention efforts in both the developing world and in developed countries, through local health systems and through international non-governmental organizations, like the International Public Health Forum (IPHF). The two major postgraduate professional degrees related to this field are the Master of Public Health (MPH) or the (much rarer) Doctor of Public Health (DrPH). Many public health researchers hold PhDs in their fields of speciality, while some public health programs confer the equivalent Doctor of Science degree instead. The United States medical residency specialty is General Preventive Medicine and Public Health.

# History of public health
In some ways, public health is a modern concept, although it has roots in antiquity. From the beginnings of human civilization, it was recognized that polluted water and lack of proper waste disposal spread communicable diseases (theory of miasma). Early religions attempted to regulate behavior that specifically related to health, from the types of food eaten to certain indulgent behaviors, such as drinking alcohol or sexual relations. The establishment of governments placed responsibility on leaders to develop public health policies and programs, in order to gain some understanding of the causes of disease and thus ensure social stability and prosperity, and maintain order.

## Early public health interventions
By Roman times, it was well understood that proper diversion of human waste was a necessary tenet of public health in urban areas. The Chinese developed the practice of variolation following a smallpox epidemic around 1000 BC. An individual without the disease could gain some measure of immunity against it by inhaling the dried crusts that formed around lesions of infected individuals. Also, children were protected by inoculating a scratch on their forearms with the pus from a lesion. This practice was not documented in the West until the early 1700s, and was used on a very limited basis. The practice of vaccination did not become prevalent until the 1820s, following the work of Edward Jenner to treat smallpox.

During the 14th-century Black Death in Europe, it was believed that removing bodies of the dead would further prevent the spread of the bacterial infection. This did little to stem the plague, however, which was most likely spread by rodent-borne fleas. Burning parts of cities resulted in much greater benefit, since it destroyed the rodent infestations. The development of quarantine in the medieval period helped mitigate the effects of other infectious diseases. However, according to Michel Foucault, the plague model of governmentality was later controverted by the cholera model. A cholera pandemic devastated Europe between 1829 and 1851, and was first fought by the use of what Foucault called "social medicine", which focused on flux, circulation of air, location of cemeteries, etc. All those concerns, born of the miasma theory of disease, were mixed with urbanistic concerns for the management of populations, which Foucault designated as the concept of "biopower". The Germans conceptualized this in the Polizeiwissenschaft ("science of police").

The science of epidemiology was founded by John Snow's identification of a polluted public water well as the source of an 1854 cholera outbreak in London. Dr. Snow believed in the germ theory of disease as opposed to the prevailing miasma theory.
Although miasma theory correctly taught that disease is a result of poor sanitation, it was based upon the prevailing theory of spontaneous generation. Germ theory developed slowly: despite Anton van Leeuwenhoek's observations of microorganisms (which are now known to cause many of the most common infectious diseases) in 1680, the modern era of public health did not begin until the 1880s, with Robert Koch's germ theory and Louis Pasteur's production of artificial vaccines. Other public health interventions include latrinization, the building of sewers, the regular collection of garbage followed by incineration or disposal in a landfill, the provision of clean water, and the draining of standing water to prevent the breeding of mosquitoes.

## Modern public health
As the prevalence of infectious diseases in the developed world decreased through the 20th century, public health began to put more focus on chronic diseases such as cancer and heart disease, and an emphasis on physical exercise was reintroduced. In America, public health worker Dr. Sara Josephine Baker lowered the infant mortality rate using preventative methods. She established many programs to help the poor in New York City keep their infants healthy. Dr. Baker led teams of nurses into the crowded neighborhoods of Hell's Kitchen and taught mothers how to dress, feed, and bathe their babies. After WWI, many states and countries followed her example in order to lower infant mortality rates.

During the 20th century, the dramatic increase in average life span is widely credited to public health achievements, such as vaccination programs and control of infectious diseases, effective safety policies such as motor-vehicle and occupational safety, improved family planning, fluoridation of drinking water, anti-smoking measures, and programs designed to decrease chronic disease. Meanwhile, the developing world remained plagued by largely preventable infectious diseases, exacerbated by malnutrition and poverty. Front-page headlines continue to present society with public health issues on a daily basis: emerging infectious diseases such as SARS, making its way from China to Canada and the United States; prescription drug benefits under public programs such as Medicare; the increase of HIV/AIDS among young heterosexual women and its spread in South Africa; the increase of childhood obesity and the concomitant increase in type II diabetes among children; the impact of adolescent pregnancy; and the ongoing social, economic and health disasters related to the 2004 Indian Ocean tsunami and to Hurricane Katrina in 2005. These are all ongoing public health challenges.

Since the 1980s, the growing field of population health has broadened the focus of public health from individual behaviors and risk factors to population-level issues such as inequality, poverty, and education. Modern public health is often concerned with addressing determinants of health across a population, rather than advocating for individual behaviour change. There is a recognition that our health is affected by many factors, including where we live, genetics, our income, our educational status and our social relationships; these are known as "social determinants of health." A social gradient in health runs through society, with those who are poorest generally suffering the worst health. However, even those in the middle classes will generally have worse health outcomes than those of a higher social stratum (WHO, 2003).
The new public health seeks to address these health inequalities by advocating for population-based policies that improve the health of the whole population in an equitable fashion. The burden of treating conditions caused by unemployment, poverty, unfit housing and environmental pollution has been calculated to account for between 16% and 22% of the clinical budget of the British National Health Service.

UK public health functions include:
- Health surveillance, monitoring and analysis
- Investigation of disease outbreaks, epidemics and risk to health
- Establishing, designing and managing health promotion and disease prevention programmes
- Enabling and empowering communities to promote health and reduce inequalities
- Creating and sustaining cross-Government and intersectoral partnerships to improve health and reduce inequalities
- Ensuring compliance with regulations and laws to protect and promote health
- Developing and maintaining a well-educated and trained, multi-disciplinary public health workforce
- Ensuring the effective performance of NHS services to meet goals in improving health, preventing disease and reducing inequalities
- Research, development, evaluation and innovation
- Quality assuring the public health function

# Public health programs
Today, most governments recognize the importance of public health programs in reducing the incidence of disease, disability, and the effects of aging, although public health generally receives significantly less government funding compared with medicine. In recent years, public health programs providing vaccinations have made incredible strides in promoting health, including the eradication of smallpox, a disease that plagued humanity for thousands of years.

One of the most important public health issues facing the world currently is HIV/AIDS. Tuberculosis, which claimed the lives of authors Franz Kafka and Charlotte Brontë, and composer Franz Schubert, among others, is also reemerging as a major concern due to the rise of HIV/AIDS-related infections and the development of tuberculosis strains that are resistant to standard antibiotics. Another major public health concern is diabetes. In 2006, according to the World Health Organization, at least 171 million people worldwide suffered from diabetes. Its incidence is increasing rapidly, and it is estimated that by the year 2030 this number will double.

A controversial aspect of public health is the control of smoking. Many nations have implemented major initiatives to cut smoking, such as increased taxation and bans on smoking in some or all public places. Proponents present evidence that smoking is one of the major killers in all developed countries, and argue that governments therefore have a duty to reduce the death rate, both by limiting passive (second-hand) smoking and by providing fewer opportunities for smokers to smoke. Opponents say that this undermines individual freedom and personal responsibility (often using the phrase nanny state in the UK), and worry that the state may be emboldened to remove more and more choice in the name of better population health overall. However, proponents counter that there is no right to inflict disease on other people via passive smoking, and that smokers remain free to smoke in their own homes.

# Public Hygiene
Public hygiene includes behaviors individuals can adopt in public settings to improve their personal health and wellness. Topics include public transportation, food preparation and public washroom use.
These are steps individuals can take themselves, such as avoiding crowded subways during flu season, using gloves when touching handrails and opening doors in public malls, and choosing clean restaurants.

# Economics of public health
The application of economics to the realm of public health has been rising in importance since the 1980s. Economic studies can show, for example, where limited public resources might best be spent to save lives or cause the greatest increase in quality of life.

# Research
Public health investigates sources of disease and descriptors of health through scientific methodology. This can lead to a public health solution to an epidemic, or a community-based intervention for chronic diseases. Either way, research can provide the link between cause and effect for public health issues.

## Community based participatory research
In contrast to clinical, patient-oriented, or literature-review research, community-based participatory research (CBPR) investigates community-based etiology, involves community leaders, and respects the forces under which the community and its participants operate in promoting and sustaining public health. As described by the WK Kellogg Foundation Community Health Scholars Program, CBPR is a "collaborative approach to research that equitably involves all partners in the research process and recognizes the unique strengths that each brings. CBPR begins with a research topic of importance to the community, has the aim of combining knowledge with action and achieving social change to improve health outcomes and eliminate health disparities." CBPR methods have been necessary for the implementation of certain public health actions. This has been difficult to accomplish because communities in poorer, less well developed areas often distrust researchers and scientists from "outside."

# Academic resources
- American Journal of Public Health
- Annual Review of Public Health, ISSN: 1545-2093 (electronic) 0163-7525 (paper), Annual Reviews
- Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science, ISSN: 1538-7135, Mary Ann Liebert
- Central Asia Health Review, New York based independent magazine
- International Journal of Prisoner Health, ISSN: 1744-9219 (electronic) 1744-9200 (paper), Taylor & Francis
- Journal of Health, Population and Nutrition, ISSN: 1606-0997
- Journal of Public Health Management and Practice, ISSN: 1078-4659, Lippincott Williams & Wilkins
- Journal of Urban Health, ISSN: 1468-2869 (electronic) 1099-3460 (paper), Springer
- Public Health Nutrition, ISSN: 1475-2727 (electronic) 1368-9800 (paper), Cambridge
- Public Health Reports, ISSN: 0033-3549
- Scandinavian Journal of Public Health, ISSN: 1651-1905 (electronic) 1403-4948 (paper), Informa Healthcare
- The European Journal of Public Health, ISSN: 1464-360X (electronic) 1101-1262 (paper), Oxford University Press
- The Journal of Infectious Diseases, ISSN: 0022-1899, The University of Chicago Press
https://www.wikidoc.org/index.php/Public_Health
ea06e7b37af780fb78c4b7036a586afaac3104f4
wikidoc
Public domain
Public domain # Overview Public domain comprises the body of knowledge and innovation (especially creative works such as writing, art, music, and inventions) in relation to which no person or other legal entity can establish or maintain proprietary interests within a particular legal jurisdiction. This body of information and creativity is considered to be part of a common cultural and intellectual heritage, which, in general, anyone may use or exploit, whether for commercial or non-commercial purposes. About 15 percent of all books are in the public domain, including 10 percent of all books that are still in print. If an item ("work") is not in the public domain, it may be the result of a proprietary interest such as a copyright, patent, or other sui generis right. The extent to which members of the public may use or exploit the work is limited to the extent of the proprietary interests in the relevant legal jurisdiction. However, when the copyright, patent or other proprietary restrictions expire, the work enters the public domain and may be used by anyone for any purpose. # No legal restriction on use A creative work is said to be in the public domain if there are no laws which restrict its use by the public at large. For instance, a work may be in the public domain if no laws establish proprietary rights over the work, or if the work or its subject matter are specifically excluded from existing laws. Because proprietary rights are founded in national laws, an item may be public domain in one jurisdiction but not another. For instance, some works of literature are public domain in the United States but not in the European Union and vice versa. The underlying idea that is expressed or manifested in the creation of a work generally cannot be the subject of copyright law (see idea-expression divide). Mathematical formulae will therefore generally form part of the public domain, to the extent that their expression in the form of software is not covered by copyright; however, algorithms can be the subject of a software patent in some jurisdictions. Works created before the existence of copyright and patent laws also form part of the public domain. The Bible and the inventions of Archimedes are in the public domain. However, copyright may exist in translations or new formulations of these works. Although "intellectual property" laws are not designed to prevent facts from entering the public domain, collections of facts organized or presented in a creative way, such as categorized lists, may be copyrighted. Collections of data with intuitive organization, such as alphabetized directories like telephone directories, are generally not copyrightable. In some countries copyright-like rights are granted for databases, even those containing mere facts. A sui generis database rights regime is in place in the European Union. Works of the United States Government and various other governments are excluded from copyright law and may therefore be considered to be in the public domain in their respective countries. They may also be in the public domain in other countries as well.
# Expiration All copyrights and patents have always had a finite term, though the terms for copyrights and patents differ. When these terms expire, the work or invention is released into the public domain; for patents, the term in most countries is 20 years. A trademark registration may be renewed and remain in force indefinitely provided the trademark is used, but could otherwise become generic. Copyrights are more complex than patents; generally, in current law, the copyright in a published work expires in all countries (except Colombia, Côte d'Ivoire, Guatemala, Honduras, Mexico, Samoa, and Saint Vincent and the Grenadines) when all of the following conditions are satisfied: - The work was created and first published before January 1, 1923, or at least 95 years before January 1 of the current year, whichever is later; - The last surviving author died at least 70 years before January 1 of the current year; - No Berne Convention signatory has passed a perpetual copyright on the work; and - Neither the United States nor the European Union has passed a copyright term extension since these conditions were last updated. (This must be a condition because the exact numbers in the other conditions depend on the state of the law at any given moment.) These conditions are based on the intersection of United States and European Union copyright law, which most other Berne Convention signatories recognize; a rough worked sketch of these conditions is given below. Note that copyright term extension under U.S. tradition usually does not restore copyright to public domain works (hence the 1923 date), but European tradition does because the EU harmonization was based on the copyright term in Germany, which had already been extended to life plus 70 years. ## United States law Copyright law in the United States has changed several times. Although it is held under Feist v. Rural that Congress does not have the power to re-copyright works that have fallen into the public domain, re-copyrighting has happened: "After World War I and after World War II, there were special amendments to the Copyright Act to permit for a limited time and under certain conditions the recapture of works that might have fallen into the public domain, principally by aliens of countries with which we had been at war." Works created by an agency of the United States government are public domain at the moment of creation. Examples include military journalism, federal court opinions (but not necessarily state court opinions), congressional committee reports, and census data. However, works created by a contractor for the government are still subject to copyright. Even public domain documents may have their availability limited by laws limiting the spread of classified information. ### Since 1978 Before 1978, unpublished works were not covered by the federal copyright act. Rather, they were covered under (perpetual) common law copyright. The Copyright Act of 1976, effective 1978, abolished common law copyright in the United States so that all works, published or unpublished, are now covered by federal statutory copyright. The claim that "pre-1923 works are in the public domain" is correct only for published works; unpublished works are under federal copyright for at least the life of the author plus 70 years.
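The four expiration conditions listed above behave like a small decision procedure, so a brief illustrative sketch may help. The following Python function is only a rough model of the conditions as stated in this article: the function name and parameters are invented for illustration, the perpetual-copyright and term-extension conditions are reduced to simple true/false inputs, the arithmetic is approximated at year granularity, and none of the jurisdiction-specific rules discussed in the rest of this section (renewal requirements, Crown Copyright, and so on) are modelled. It is not legal advice.

```python
from datetime import date

def likely_public_domain_everywhere(
    first_published_year,               # year the work was created and first published
    last_author_death_year,             # year the last surviving author died
    perpetual_copyright_exists=False,   # a Berne signatory grants perpetual copyright on the work
    term_extended_since_update=False,   # US or EU extended terms since these conditions were written
    current_year=None,
):
    """Rough, year-granularity sketch of the four conditions listed under Expiration above."""
    if current_year is None:
        current_year = date.today().year

    # Condition 1: first published before 1923, or at least 95 years before
    # January 1 of the current year, whichever cutoff is later.
    cutoff = max(1923, current_year - 95)
    published_long_enough_ago = first_published_year < cutoff

    # Condition 2: last surviving author died at least 70 years before
    # January 1 of the current year (approximated at year granularity).
    author_dead_long_enough = last_author_death_year < current_year - 70

    # Conditions 3 and 4: no perpetual copyright anywhere, and no term
    # extension since these conditions were last updated.
    return (published_long_enough_ago
            and author_dead_long_enough
            and not perpetual_copyright_exists
            and not term_extended_since_update)

# Illustrative use: a work first published in 1920 whose last author died in 1930.
print(likely_public_domain_everywhere(1920, 1930, current_year=2008))  # True
```

Because all four conditions must hold, a single failed test (for example, an author who died fewer than 70 years ago) is enough to keep a work out of this conservative "public domain everywhere" category, even if the work is already in the public domain in some individual countries.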
For a work made for hire, the copyright in a work created before 1978, but not theretofore in the public domain or registered for copyright, subsists from January 1, 1978, and endures for a term of 95 years from the year of its first publication, or a term of 120 years from the year of its creation, whichever expires first. If the work was created before 1978 but first published on or before December 31, 2002, the work is covered by federal copyright until 2047. ### 1964 to 1977 Works published with notice of copyright or registered in unpublished form prior to January 1, 1964, had to be renewed during the 28th year of their first term of copyright to maintain copyright for a full 95-year term. Until the Berne Convention Implementation Act of 1988, the lack of a proper copyright notice would place an otherwise copyrightable work into the public domain, although for works published between January 1, 1978 and February 28, 1989, this could be prevented by registering the work with the Library of Congress within 5 years of publication. After March 1, 1989, an author's copyright in a work begins when it is fixed in a tangible form; neither publication nor registration is required, and a lack of a copyright notice does not place the work into the public domain. ### Sound recordings Sound recordings fixed before February 15, 1972, were generally covered by common law or in some cases by statutes enacted in certain states, not by federal copyright law. The 1976 Copyright Act, effective 1978, provides federal copyright for unpublished and published sound recordings fixed on or after February 15, 1972. Recordings fixed before February 15, 1972, are still covered, to varying degrees, by common law or state statutes. Any rights or remedies under state law for sound recordings fixed before February 15, 1972, are not annulled or limited by the 1976 Copyright Act until February 15, 2067. ### Term extensions Critics of copyright term extensions have said that Congress has achieved a perpetual copyright term "on the installment plan." ## British law British government works are restricted by either Crown Copyright or Parliamentary Copyright. Published Crown Copyright works become public domain at the end of the year 50 years after they were published, unless the author of the work held copyright and assigned it to the Crown. In that case, the copyright term is the usual life of author plus 70 years. Unpublished Crown Copyright documents become public domain at the end of the year 125 years after they were first created. However, under the legislation that created this rule, and abolished the traditional common law perpetual copyright of unpublished works, no unpublished works will become public domain until 50 years after the legislation came into effect. Since the legislation became law on 1 August 1989, no unpublished works will become public domain under this provision until 2039. Parliamentary Copyright documents become public domain at the end of the year 50 years after they were published. Crown Copyright is waived on some government works provided that certain conditions are met. ## Laws of Canada, Australia, and other Commonwealth nations These numbers reflect the most recent extensions of copyright in the United States and Europe. Canada and New Zealand have not, as of 2006, passed similar twenty-year extensions. Consequently, their copyright expiry times are still life of the author plus 50 years. 
Australia passed a 20-year copyright extension in 2004, but delayed its effect until 2005, and did not make it revive already-expired copyrights. Hence, in Australia works by authors who died before 1955 are still in the public domain. As a result, works ranging from Peter Pan to the stories of H. P. Lovecraft are in the public domain in these countries. (The copyright status of Lovecraft's work is debatable, as no copyright renewals, which were necessary under the laws of that time, have been found. Also, two competing parties have independently claimed copyright ownership of his work.) As with most other Commonwealth of Nations countries, Canada and Australia follow the general lead of the United Kingdom on copyright of government works. Both have a version of Crown Copyright which lasts for 50 years from publication. New Zealand also has Crown Copyright, but with a much longer term of 100 years from the date of publication. India has a government copyright of sixty years from publication, to coincide with its somewhat unusual copyright term of life of the author plus sixty years. ## Thai law According to Thai copyright law, the copyright term is the life of the author plus 50 years. When the author is a legal entity or an anonymous person, the copyright term is 50 years from the date of publication. Works of applied art (defined as drawings/paintings, sculpture, prints, architecture, photography, and drafts) have a copyright term of 50 years from publication. Republication of works after the expiration of the copyright term does not reset the copyright term. Thai state documents are in the public domain. ## Japanese law Japanese copyright law does not mention the public domain. Hence, even when some materials are said to be "in the public domain" there can be some use restrictions. In that case, the term copyright-free is sometimes used instead. Many films from before 1953, both Japanese and non-Japanese, are considered to be in the public domain in Japan. ## Examples Examples of inventions whose patents have expired include the inventions of Thomas Edison. Examples of works whose copyrights have expired include the works of Carlo Collodi, Mozart, and most of the works of Mark Twain, excluding the work first published in 2001, A Murder, a Mystery, and a Marriage. In the United States, the images of Frank Capra's classic film It's a Wonderful Life (1946) entered the public domain in 1974, because someone inadvertently failed to file a copyright renewal application with the Copyright Office during the 28th year after the film's release or publication. It was not until 1993 that Republic Pictures relied on the 1990 United States Supreme Court ruling in Stewart v. Abend to enforce its claim of copyright to portions of the film's sound track. As a result, only NBC is currently licensed to show the film on U.S. network television, the colourized versions have been withdrawn, and Republic obtained exclusive video rights to the film (under license with Artisan Entertainment). Rights to It's a Wonderful Life now belong to Paramount Pictures. Currently four shorts by the Three Stooges are in the public domain due to accidental failure to renew their copyrights in the '60s. These are Disorder in the Court, Brideless Groom, Malice in the Palace, and Sing a Song of Six Pants. Other features and films from the Stooges are known to be in the public domain as well. Several episodes of The Lucy Show are similarly in the public domain. Some works may never fully lapse into the public domain, such as the play Peter Pan by J. M.
Barrie. While the copyright of this work expired in the United Kingdom in 1987, it has been granted special treatment under the Copyright, Designs and Patents Act 1988 (Schedule 6), which requires certain royalties to be paid for performances within the UK, so long as Great Ormond Street Hospital continues to exist. J. M. Barrie had bequeathed the rights to Peter Pan to the hospital in perpetuity as an endowment. # Disclaimer of interest Laws may make some types of works and inventions ineligible for monopoly; such works immediately enter the public domain upon publication. Many kinds of mental creations, such as publicized baseball statistics, are never covered by copyright. However, any special layout of baseball statistics, or the like, would be covered by copyright law. For example, while a phonebook is not covered by copyright law, any special method of laying out the information would be. Some concrete examples: U.S. copyright law, 17 U.S.C. § 105, releases all works created by the U.S. government into the public domain. U.S. patent applications containing a copyright notice must also include a disclaimer of certain exclusive rights as part of the terms of granting the patent to the invention (leaving open the question regarding copyright of patents with no such notice). Agreements that Germany signed at the end of World War I released such trademarks as "aspirin" and "heroin" into the public domain in many areas. Another example would be Charles Darwin's theory of evolution. Being an abstract idea, it has never been patentable. After Darwin constructed his theory, he did not disclose it for over a decade (see Development of Darwin's theory). He could have kept his manuscript in his desk drawer forever, but once he published the idea, the idea itself entered the public domain. However, the carrier of his ideas, in the form of a book titled The Origin of Species, was covered by copyright (though, since he died in 1882, the copyright has since expired). ## Copyright In the past, in some jurisdictions such as the USA, a work would enter the public domain with respect to copyright if it was released without a copyright notice. This was true prior to March 1, 1989 (according to the U.S. Copyright Office), but is no longer the case. Any work (of certain, enumerated types) receives copyright as soon as it is fixed in a tangible medium. It is commonly believed by non-lawyers that it is impossible to put a work into the public domain. Although copyright law generally does not provide any statutory means to "abandon" copyright so that a work can enter the public domain, this does not mean that it is impossible or even difficult, only that the law is somewhat unclear. Congress may not have felt it necessary to codify this part of the law, because abandoning property (like a tract of land) to the public domain has traditionally been a matter of common law, rather than statute. (Alternatively, because copyright has traditionally been seen as a valuable right, one which required registration to achieve, it would not have made sense to contemplate someone abandoning it in 1976 and 1988.) ### Statutory law There are several references to putting copyrighted work into the public domain. The first reference is actually in a statute passed by Congress, in the Computer Software Rental Amendments Act of 1990 (H.R. 5498 of the 101st Congress). Although most of the Act was codified into Title 17 of the U.S.
Code, there is a very interesting provision relating to "public domain shareware" which was not, and is therefore often overlooked. Sec. 105. Recordation of Shareware (a) IN GENERAL- The Register of Copyrights is authorized, upon receipt of any document designated as pertaining to computer shareware and the fee prescribed by section 708 of title 17, United States Code, to record the document and return it with a certificate of recordation. (b) MAINTENANCE OF RECORDS; PUBLICATION OF INFORMATION- The Register of Copyrights is authorized to maintain current, separate records relating to the recordation of documents under subsection (a), and to compile and publish at periodic intervals information relating to such recordations. Such publications shall be offered for sale to the public at prices based on the cost of reproduction and distribution. (c) DEPOSIT OF COPIES IN LIBRARY OF CONGRESS- In the case of public domain computer shareware, at the election of the person recording a document under subsection (a), 2 complete copies of the best edition (as defined in section 101 of title 17, United States Code) of the computer shareware as embodied in machine-readable form may be deposited for the benefit of the Machine-Readable Collections Reading Room of the Library of Congress. (d) REGULATIONS- The Register of Copyrights is authorized to establish regulations not inconsistent with law for the administration of the functions of the Register under this section. All regulations established by the Register are subject to the approval of the Librarian of Congress. One purpose of this legislation appears to be to allow "public domain shareware" to be filed at the Library of Congress, presumably so that the shareware would be more widely disseminated. Therefore, one way to release computer software into the public domain might be to make the filing and pay the $20 fee. This could have the effect of "certifying" that the author intended to release the software into the public domain. It does not seem that registration is necessary to release the software into the public domain, because the law does not state that public domain status is conferred by registration. Judicial rulings support this conclusion; see below. By comparing paragraphs (a) and (c), one can see that Congress distinguishes "public domain" shareware as a special kind of shareware. Because this law was passed after the Berne Convention Implementation Act of 1988, Congress was well aware that newly created computer programs (two years' worth, since the Berne Act was passed) would automatically have copyright attached. Therefore, one reasonable inference is that Congress intended that authors of shareware would have the power to release their programs into the public domain. This interpretation is followed by the Copyright Office in 37 C.F.R. § 201.26. The Berne Convention Implementation Act of 1988 states in section twelve that the Act "does not provide copyright protection for any work that is in the public domain." The congressional committee report explains that this means simply that the Act does not apply retroactively. Some interest groups lobbied heavily to make the Act retroactive in order to increase the U.S.'s negotiating leverage with other countries, because the U.S. often asks developing countries to allow the copyrighting of previously public-domain work.
Although the only part of the act that does mention "public domain" does not speak to whether authors have the right to dedicate their work to the public domain, the remainder of the committee report does not say that they intended copyright should be an indestructible form of property. Rather the language speaks to getting rid of formalities in order to comply with Berne (non-compliance had become a severe impediment in trade negotiations) and making registration and marking optional, but encouraged. A fair reading is that the Berne Act did not intend to take away author's right to dedicate works to the public domain, which they had (by default) under the 1976 Act. Although there is support in the statutes for allowing work to be dedicated to the public domain, there cannot be an unlimited right to dedicate work to the public domain because of a quirk of U.S. copyright law which grants the author of a work the right to cancel "the exclusive or nonexclusive grant of a transfer or license of copyright or of any right under a copyright" thirty-five years later, unless the work was originally a work for hire. It is unsettled how this section would mesh with a purported public domain dedication. Any of these interpretations are possible: - No effect. Any holder of a copyright can release it to the public domain. This interpretation is probably wrong, because then an author would lose the right to his "termination right," which in practical terms means a royalty. To prevent paying the royalty, a comic book company could release the copyright to the public domain but hold onto the trademark, which would suffice to prevent knock-off comics from being made. Because the Captain America case (Marvel v. Simon) showed that this termination right cannot be alienated before death, this interpretation is almost certainly wrong. - Some effect. An author may release his own work into the public domain, and a company holding a work for hire may release his work into the public domain. But a company which has purchased a copyright from an author (as was the case with most of the "Golden Age" comic book writers) cannot. Although the distinction of allowing an author to release his own work is not explicit in the statute, it may not be literally inconsistent (it is not a "transfer" or a "license," and it arguably is not a grant of a right under copyright), and this reading is necessary to comply with the 1990 Act discussed above, as well as the case law discussed below. - Strong effect. Only a company holding a work for hire can release the work into the public domain. Because of the references to "shareware" (above) and "programmers" (below), and the fact that many software companies in the 1980s were quite small (and thus did not have employees), this reading seems inconsistent with the intent of Congress. ### Case law Another form of support comes from the seminal case Computer Associates Int'l v. Altai, 982 F.2d 693. This case set the standard for determining copyright infringement of computer software and is still followed today. Moreover, it was decided by the Second Circuit appellate court, which is famous for handing down some of the most well-reasoned American copyright decisions. In this case, it discusses the public domain. (c) Elements Taken from the Public Domain Closely related to the non-protectability of scenes a faire, is material found in the public domain. Such material is free for the taking and cannot be appropriated by a single author even though it is included in a copyrighted work. ... 
We see no reason to make an exception to this rule for elements of a computer program that have entered the public domain by virtue of freely accessible program exchanges and the like. See 3 Nimmer Section 13.03; see also Brown Bag Software, slip op. at 3732 (affirming the district court's finding that "'[p]laintiffs may not claim copyright protection of an . . . expression that is, if not standard, then commonplace in the computer software industry.'"). Thus, a court must also filter out this material from the allegedly infringed program before it makes the final inquiry in its substantial similarity analysis. This decision holds that computer software may enter the public domain through "freely accessible program exchanges and the like," or by becoming "commonplace in the computer industry." Relying only on this decision, it is unclear whether an author can dedicate his work to the public domain simply by labeling it as such, or whether dedication to the public domain requires widespread dissemination. This could make a distinction in a CyberPatrol-like case, where a software program is released, leading to litigation, and as part of a settlement the author assigns his copyright. If the author has the power to release his work into the public domain, there would be no way for the new owner to stop the circulation of the program. A court may look on an attempt to abuse the public domain in this way with disfavor, particularly if the program has not been widely disseminated. Either way, a fair reading is that an author may choose to release a computer program to the public domain if he can arrange for it to become popular and widely disseminated. ### Treatise analysis The treatise cited (Nimmer) holds in its most recent edition: It is axiomatic that material in the public domain is not protected by copyright, even when incorporated into a copyrighted work. ... An enormous amount of public domain software exists in the computer industry, perhaps to a much greater extent than is true of other fields. Nationwide computer "bulletin boards" permit users to share and distribute programs. In addition, computer programming texts may contain examples of actual code that programmers are encouraged to copy. Programmers often will build existing public domain software into their works. The courts thus must be careful to limit protection only to those elements of the program that represent the author's original work. Although Computer Associates only mentioned the issue in passing, Nimmer observes that the public domain is particularly rich and valuable for computer programs. He seems to say that a computer program author who wishes to release his work into the public domain may either include it in a book as example code or post it on a "bulletin board" and encourage sharing and distribution. (Nimmer is the treatise most widely cited in copyright opinions, and is generally authoritative.)
Patent Office to establish a base of prior art without the inconvenience, cost, and hassle of filing patent applications for inventions of no immediate monetary value. (Unix was famously described in this journal.) This is sometimes called "defensive disclosure" - one way to make sure you are not later accused of infringing a patent on your own invention. There is an exception to this rule, however: in U.S. (not European) law, an inventor may file a patent claim up to one year after publishing a description (but not, of course, if someone else published or used it first). In practice, patent examiners only consider other patents and the books they have in their library for prior art, largely because the patent office has an elaborate classification system for inventions. This means that an increasing number of issued patents may be invalid, based upon prior art that was not brought to the examiner's attention. Once a patent is issued, it is very expensive to invalidate. Publishing a description on a website as a preemptive disclosure does very little in a practical sense to release an invention to the public domain; it might still be considered "patentable", although erroneously. However, anyone aware of an omitted prior art citation in an issued patent may submit it to the US Patent Office and request a "reexamination" of the patent during the enforceable period of the patent (that is, its life plus statute of limitations). This may result in loss of some or all of the patent on the invention, or it may backfire and actually strengthen the claims. An applicant may also choose to file a Statutory Invention Registration, which has the same effect as a patent for prior art purposes. These SIRs are relatively expensive. These are used strategically by large companies to prevent competitors from obtaining a patent. Section 102(c) says that an invention that has been "abandoned" cannot be patented. There is precious little case-law on this point. It is largely a dead letter. If an inventor has an issued patent, there are several ways to release it to the public domain (other than simply letting it expire). First, he can fail to pay the maintenance fee the next time it is due, about every four years. Alternatively he can file a terminal disclaimer under 37 CFR 1.321 for a reasonable fee. The regulations explicitly say that the "patentee may disclaim or dedicate to the public the entire term, or any terminal part of the term, of the patent granted. Such disclaimer is binding upon the grantee and its successors or assigns." Usually this is used during the application process to prevent another patent from a "double patenting" invalidation. Lastly, he may grant a patent license to the world, although the issue of revocability may raise its head again. ## Trade secret If guarded properly, trade secrets are forever. A business may keep the formula to Coca-Cola a secret. However, once it is disclosed to the public, the former secret enters public domain, although an invention using the former secret may still be patentable in the United States if it is not barred by statute (including the on-sale bar). Some businesses choose to protect products, processes, and information by guarding them as trade secrets, rather than patenting them. Hershey Foods, Inc., for example, does not patent some of its processes, such as the recipe for Reese's, but rather maintains them as trade secrets, to prevent competitors from easily duplicating or learning from their invention disclosures. 
One risk, however, is that anyone may reverse engineer a product and thus discover (and copy and publish) all of its secrets, to the extent they are not covered by other laws (e.g., patent, contract). ## Trademark A trademark registration is renewable. If a trademark owner wishes to do so, he may maintain a registration indefinitely by paying renewal fees, using the trademark and defending the registration. However, a trademark or brand can become unenforceable if it becomes the generic term for a particular type of product or service – a process called "genericide." If a mark undergoes genericide, people are using the term generically, not as a trademark to exclusively identify the particular source of the product or service. One famous example is "thermos" in the United States. Because trademarks are registered with governments, some countries or trademark registries may recognize a mark, while others may have determined that it is generic and not allowable as a trademark in that registry. For example, the drug "acetylsalicylic acid" (2-acetoxybenzoic acid) is better known as aspirin in the United States – a generic term. In Canada, however, "aspirin" is still a trademark of the German company Bayer. Bayer lost the trademark after World War I, when the mark was sold to an American firm. So many copycat products entered the marketplace during the war that it was deemed generic just three years later. Terms can be deemed "generic" in two ways. First, any potential mark can be deemed "generic" by a trademark registry, that refuses to register it. In this instance, the term has no secondary meaning that helps consumers identify the source of the product; the term serves no function as a "mark". Second, a mark, already in use, may be deemed generic by a court or registry after the mark is challenged as generic – this is known as "genericide". In this instance, the term previously had a secondary meaning, but lost its source-identifying function. To avoid "genericide", a trademark owner must balance between trying to dominate the market, and dominating their market to such an extent that their product name defines the market. A manufacturer who invents an amazing breakthrough product which cannot be succinctly described in plain English (for example, a vacuum-insulated drinking flask) will likely find its product described by the trademark ("Thermos"). If the product continues to dominate the market, eventually the trademark will become generic ("thermos"). However, "genericide" is not an inevitable process. In the late 1980s "Nintendo" was becoming synonymous with home video game consoles but Nintendo was able to reverse this process through marketing campaigns. Xerox was also successful in avoiding its name becoming synonymous with the act of photocopying (although, in some languages (Russian) and countries (like India), it became generic). Trademarks currently thought to be in danger of being generic include Jell-O, Band-Aid, Rollerblade, Google, Spam, Hoover, and Sheetrock. Google vigorously defends its trademark rights. Although Hormel has resigned itself to genericide , it still fights attempts by other companies to register "spam" as a trademark in relation to computer products . When a trademark becomes generic, it is as if the mark were in the public domain. 
Trademarks which have been genericized in particular places include: Escalator, Trampoline, Raisin Bran, Linoleum, Dry Ice, Shredded Wheat (generic in US), Mimeograph, Yo-Yo, Kerosene, Cornflakes, Cube Steak, Lanolin, and High Octane, (Source: Xerox ad, reprinted in Copyright, Patent, Trademark, ..., by Paul Goldstein, 5th ed., p. 245) as well as Aspirin (generic in the United States, but not in Canada), Allen wrench, Beaver Board, Masonite, Coke, Pablum, Styrofoam, Heroin, Bikini, Chyron, Weedwhacker, Kleenex, Linux (generic in Australia) and Zipper. ## Domain name People may buy and sell domain names. Sometimes, people advertise them as their own "intellectual property". In early 2000, the record-breaker domain name "business.com" was sold for $8 million (this was resold in July 2007 for $345 million). A domain name never enters public domain. If nobody owns it, it simply doesn't exist. Top level domains, such as .com, are controlled by the ICANN (Internet Corporation for Assigned Names and Numbers). A domain name is sometimes described as a lease, but this has only a shred of truth in it. In fact it is much closer to a trademark. While a leaseholder of, say, real estate cannot be ejected from the property by anybody (except the government, in rare cases), domain names are subject to cybersquatting suits and trademark suits. # Public domain and the Internet The term "public domain" is often poorly understood and has created significant legal controversy. Historically, most parties attempting to address public domain issues fell into two camps: - Businesses and organizations who could devote staff to resolving legal conflicts through negotiation and the court system. - Individuals and organizations using materials covered by the fair use doctrine, reducing the need for substantial governmental or corporate resources to track down individual offenders. With the advent of the Internet, however, it became possible for anybody with access to this worldwide network to "post" copyrighted or otherwise-licensed materials freely and easily. This aggravated an already established but false belief that if something is available through a free source, it must be public domain. Once such material was available on the net, it could be perfectly copied among thousands or even millions of computers very quickly and essentially without cost. ## Freely obtained does not mean free to republish These factors have reinforced the false notion that "freely obtained" means "public domain." One could argue that the Internet is a publicly-available domain, not licensed or controlled by any individual, company, or government; therefore, everything on the Internet is public domain. This specious argument ignores the fact that licensing rights are not dependent on the means of distribution or consumer acquisition. (If someone gives a person stolen merchandise, it is still stolen, even if the receiving party was not aware of it.) Chasing down copyright violations based on the idea that information is inherently free has become a primary focus of industries whose financial structure is based on their control of the distribution of such media. ## (Almost) everything written down is copyrighted Another complication is that publishing exclusively on the Internet has become extremely popular. In countries party to the Berne Convention, an author's original works are covered by copyright as soon as the work is put into a "fixed" form; no formal copyright notice or registration is necessary. 
But such laws were passed at a time when the focus was on materials that could not be as easily and cheaply reproduced as digital media, nor did they comprehend the ultimate impossibility of determining which set of electronic bits is original. Technically, any Internet posting (such as a blog entry or an email) could be considered copyrighted material unless explicitly stated otherwise. The distribution of many types of Internet postings (particularly Usenet articles and messages sent to electronic mailing lists) inherently involves duplication. The act of posting such a work can therefore be taken to imply consent to a certain amount of copying, as dictated by the technical details of the manner of distribution. However, it does not necessarily imply total waiver of copyright. ## Furthering the public domain with the Internet Many people are using the Internet to contribute to the public domain, or to make works in the public domain more accessible to more people. For example, Project Gutenberg and LibriVox coordinate the efforts of people who transcribe works in the public domain into electronic form. Some projects exist for the sole purpose of releasing material into the public domain or making it available under no-cost licenses. The IMSLP (International Music Score Library Project) is attempting to create a virtual library containing all public domain musical scores, as well as scores from composers who are willing to share their music with the world free of charge. Note that there are many works that are not part of the public domain, but for which the owner of some proprietary rights has chosen not to enforce those rights, or to grant some subset of those rights to the public. See, for example, the Free Software Foundation, which creates copyrighted software and licenses it without charge to the public for most uses under a class of license called "copyleft", forbidding only proprietary redistribution. Wikipedia does much the same thing with its content under the GNU Free Documentation License. Sometimes such work is inadvertently referred to as "public domain" in colloquial speech. Note also that while some works (especially musical works) may be in the public domain, U.S. law considers performances or (some) transcriptions of those works to be derivative works, potentially subject to their own copyrights. Similarly, a film adaptation of a public-domain story (such as a fairy tale or a classic work of literature) may itself be copyrightable. ## Kopimi There is an established form of copyright antonym called kopimi, a wordplay on "copy me." Kopimi is not a license; it is simply a message that expresses the author's desire for people to modify and distribute the work. ## Media in the public domain There are hundreds of movies, cartoons and television shows that have fallen into the public domain. Some of these movies are considered classics, such as The Gold Rush (1925) starring Charlie Chaplin, A Star Is Born (1937), and Night of the Living Dead (1968). The works either did not include a proper copyright notice when published, or the copyright was not renewed, and therefore the content is now in the public domain.
https://www.wikidoc.org/index.php/Public_domain
e88ac26c7ed2aa2fe5f94ffeef406a2530938374
wikidoc
Purdue Pharma
Purdue Pharma Purdue Pharma L.P. is a privately held pharmaceutical company founded by physicians. It is located in Stamford, Connecticut. Purdue is best known for painkillers, but it has also branched into other areas such as oncology and nutraceuticals. In its early years, Purdue was known for its antiseptic product, Betadine Solution, and its Senokot laxatives. Today, it is best known for its products for the treatment of pain: MS Contin Tablets and OxyContin Tablets. # Prescription drug abuse Purdue has also been involved in measures against prescription drug abuse, particularly of its well-known OxyContin brand. In 2001, Connecticut Attorney General Richard Blumenthal issued a statement urging Purdue to take action regarding abuse of OxyContin. Blumenthal noted that while Purdue seemed sincere, there was little action being taken beyond "cosmetic and symbolic steps." After Purdue announced plans to reformulate the drug, Blumenthal noted that this would take time, and that "Purdue Pharma has a moral, if not legal, obligation to take effective steps now that address addiction and abuse even as it works to reformulate the drug." The company has since implemented a comprehensive program designed to assist in the detection of illegal trafficking and abuse of prescription drugs without compromising patient access to proper pain control. In May 2007, the company pleaded guilty to misleading the public about OxyContin's risk of addiction. Purdue Pharma, its president, top lawyer, and former chief medical officer agreed to pay $634.5 million in fines for claiming the drug was less addictive and less subject to abuse than other pain medications.
https://www.wikidoc.org/index.php/Purdue_Pharma
78d1ac90527c9ab0ef019620b9a295a6dacae617
wikidoc
Puumala virus
Puumala virus - Puumala virus is a species of hantavirus and causes nephropathia epidemica. It is common in northern Europe and Russia. - The bank vole acts as a reservoir for the virus, and nephropathia epidemica therefore peaks at the same time as the population of these voles, typically every 3 to 4 years. Farmers are often exposed to the droppings of these animals and are therefore more commonly infected. - The virus was identified and named in 1980 by two Finnish researchers, Markus Brummer-Korvenkontio and Antti Vaheri. - Puumala is a municipality in Finland. # Clinical Manifestations Nephropathia epidemica is a viral infection caused by the Puumala virus. The incubation period is three weeks. It has a sudden onset with fever, headache, back pain and gastrointestinal symptoms; more severe manifestations, such as internal hemorrhaging, sometimes occur, and the disease can even lead to death. It is milder than the haemorrhagic fever with renal syndrome that can be observed in other parts of the world. 80% of infected individuals are asymptomatic or develop only mild symptoms, and the disease does not spread from human to human. The bank vole is the reservoir for the virus, which is contracted from aerosolized droppings. This infection is known as myyräkuume in Finland (Mole fever), as the virus can spread to humans in dust contaminated by the droppings of voles and mice. In Sweden it is known as sorkfeber (Vole fever). In Norway it is called "musepest" (mouse plague).
https://www.wikidoc.org/index.php/Puumala_virus
e13a6ffa3bb14074e6566fc35cdbca2d1f756334
wikidoc
Quadrate bone
Quadrate bone The quadrate bone is part of the skull in most tetrapods, including amphibians, sauropsids ("reptiles"), birds and early synapsids. In these animals it connects to the quadratojugal and squamosal in the skull, and forms part of the jaw joint (the other part is the articular bone at the rear end of the lower jaw). In snakes, the quadrate bone has become elongated and very mobile, and contributes greatly to their ability to swallow very large prey items. In mammals the articular and quadrate bones have migrated to the middle ear and are known as the malleus and incus. In fact, paleontologists regard this modification as the defining characteristic of mammals.
https://www.wikidoc.org/index.php/Quadrate_bone
3681af172323a097f0c1cd06abb9a4d77f4cfe6a
wikidoc
Quantum state
Quantum state In quantum physics, a quantum state is a mathematical object that fully describes a quantum system. One typically imagines some experimental apparatus and procedure which "prepares" this quantum state; the mathematical object then reflects the setup of the apparatus. Quantum states can be statistically mixed, corresponding to an experiment involving a random change of the parameters. States obtained in this way are called mixed states, as opposed to pure states which cannot be described as a mixture of others. When performing a certain measurement on a quantum state, the result is in general described by a probability distribution, and the form that this distribution takes is completely determined by the quantum state and the observable describing the measurement. However, unlike in classical mechanics, the result of a measurement on even a pure quantum state is only determined probabilistically. This reflects a core difference between classical and quantum physics. Mathematically, a pure quantum state is typically represented by a vector in a Hilbert space. In physics, bra-ket notation is often used to denote such vectors. Linear combinations (superpositions) of vectors can describe interference phenomena. Mixed quantum states are described by density matrices. In a more general mathematical context, quantum states can be understood as positive normalized linear functionals on a C*-algebra; see GNS construction. # Conceptual description ## The state of a physical system The state of a physical system is a complete description of the parameters of the experiment. To understand this rather abstract notion, it is useful to first explore it in an example from classical mechanics. Consider an experiment with a (non-quantum) particle of mass m=1 which moves freely, and without friction, in one spatial direction. We start the experiment at time t=0 by pushing the particle with some speed into some direction. Doing this, we determine the initial position q and the initial momentum p of the particle. These initial conditions are what characterizes the state \sigma of the system, formally denoted as \sigma = (p,q). We say that we prepare the state of the system by fixing its initial conditions. At a later time t>0, we conduct measurements on the particle. The measurements we can perform on this simple system are essentially its position Q(t) at time t, its momentum P(t), and combinations of these. Here P(t) and Q(t) refer to the measurable quantities (observables) of the system as such, not the specific results they produce in a certain run of the experiment. However, knowing the state \sigma of the system, we can compute the value of the observables in the specific state, i.e., the results that our measurements will produce, depending on p and q. We denote these values as \langle P(t) \rangle _\sigma and \langle Q(t) \rangle _\sigma. In our simple example, it is well known that the particle moves with constant velocity; therefore, \langle P(t) \rangle _\sigma = p, \quad \langle Q(t) \rangle _\sigma = pt+q. Now suppose that we start the particle with a random initial position and momentum. (For argument's sake, we may suppose that the particle is pushed away at t=0 by some apparatus which is controlled by a random number generator.) The state \sigma of the system is now not described by two numbers p and q, but rather by two probability distributions.
The observables P(t) and Q(t) will produce random results now; they become random variables, and their values in a single measurement cannot be predicted. However, if we repeat the experiment sufficiently often, always preparing the same state \sigma, we can predict the expectation value of the observables (their statistical mean) in the state \sigma. The expectation value of P(t) is again denoted by \langle P(t) \rangle _\sigma, etc. These "statistical" states of the system are called mixed states, as opposed to the pure states \sigma=(p,q) discussed further above. Abstractly, mixed states arise as a statistical mixture of pure states. ## Quantum states In quantum systems, the conceptual distinction between observables and states persists just as described above. The state \sigma of the system is fixed by the way the physicist prepares his experiment (e.g., how he adjusts his particle source). As above, there is a distinction between pure states and mixed states, the latter being statistical mixtures of the former. However, some important differences arise in comparison with classical mechanics. In quantum theory, even pure states show statistical behaviour. Regardless of how carefully we prepare the state \sigma of the system, measurement results are not repeatable in general, and we must understand the expectation value \langle A \rangle _\sigma of an observable A as a statistical mean. It is this mean that is predicted by physical theories. For any fixed observable A, it is generally possible to prepare a pure state \sigma_A such that A has a fixed value in this state: If we repeat the experiment several times, each time measuring A, we will always obtain the same measurement result, without any random behaviour. Such pure states \sigma_A are called eigenstates of A. However, it is impossible to prepare a simultaneous eigenstate for all observables. For example, we cannot prepare a state such that both the position measurement Q(t) and the momentum measurement P(t) (at the same time t) produce "sharp" results; at least one of them will exhibit random behaviour. This is the content of the Heisenberg uncertainty relation. Moreover, in contrast to classical mechanics, it is unavoidable that performing a measurement on the system generally changes its state. More precisely: After measuring an observable A, the system will be in an eigenstate of A; thus the state has changed, unless the system was already in that eigenstate. This expresses a kind of logical consistency: If we measure A twice in the same run of the experiment, the measurements being directly consecutive in time, then they will produce the same results. This has some strange consequences, however: Consider two observables, A and B, where A corresponds to a measurement earlier in time than B. Suppose that the system is in an eigenstate of B. If we measure only B, we will not notice statistical behaviour. If we measure first A and then B in the same run of the experiment, the system will transfer to an eigenstate of A after the first measurement, and we will generally notice that the results of B are statistical. Thus, quantum mechanical measurements influence one another, and the order in which they are performed is important. Another feature of quantum states becomes relevant if we consider a physical system that consists of multiple subsystems; for example, an experiment with two particles rather than one.
Quantum physics allows for certain states, called entangled states, that show statistical correlations between measurements on the two particles which cannot be explained by classical theory. For details, see entanglement. These entangled states lead to experimentally testable properties (Bell's theorem) that make it possible to distinguish between quantum theory and alternative classical (non-quantum) models. ## Schrödinger picture vs. Heisenberg picture In the discussion above, we have taken the observables P(t), Q(t) to be dependent on time, while the state \sigma was fixed once at the beginning of the experiment. This approach is called the Heisenberg picture. One can, equivalently, treat the observables as fixed, while the state of the system depends on time; that is known as the Schrödinger picture. Conceptually (and mathematically), both approaches are equivalent; choosing one of them is a matter of convention. Both viewpoints are used in quantum theory. While non-relativistic quantum mechanics is usually formulated in terms of the Schrödinger picture, the Heisenberg picture is often preferred in a relativistic context, that is, for quantum field theory. # Formalism in quantum physics ## Pure states as rays in a Hilbert space Quantum physics is most commonly formulated in terms of linear algebra, as follows. Any given system is identified with some Hilbert space, such that each vector in the Hilbert space (apart from the origin) corresponds to a pure quantum state. In addition, two vectors that differ only by a nonzero complex scalar correspond to the same state (in other words, each pure state is a ray in the Hilbert space). Alternatively, many authors choose to only consider normalized vectors (vectors of norm 1) as corresponding to quantum states. In this case, the set of all pure states corresponds to the unit sphere of a Hilbert space, with the proviso that two normalized vectors correspond to the same state if they differ only by a complex scalar of absolute value 1 (called a phase factor). ## Bra-ket notation Calculations in quantum mechanics make frequent use of linear operators, inner products, dual spaces, and Hermitian conjugation. In order to make such calculations more straightforward, and to obviate the need (in some contexts) to fully understand the underlying linear algebra, Paul Dirac invented a notation to describe quantum states, known as bra-ket notation. Although the details of this are beyond the scope of this article (see the article Bra-ket notation), some consequences of this are: - The variable name used to denote a vector (which corresponds to a pure quantum state) is chosen to be of the form |\psi\rangle (where the "\psi" can be replaced by any other symbols, letters, numbers, or even words). This can be contrasted with the usual mathematical notation, where vectors are usually bold, lower-case letters, or letters with arrows on top. - Instead of vector, the term ket is used synonymously. - Each ket |\psi\rangle is uniquely associated with a so-called bra, denoted \langle\psi|, which is also said to correspond to the same physical quantum state. Technically, the bra is an element of the dual space, and is related to the ket by the Riesz representation theorem. - Inner products (also called brackets) are written so as to look like a bra and ket next to each other: \lang \psi_1|\psi_2\rang. (Note that the phrase "bra-ket" is supposed to resemble "bracket".)
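To make the bra-ket conventions above concrete, here is a minimal NumPy sketch (an illustration added to this discussion, not part of the original article; the particular vectors are arbitrary choices). It represents kets as complex vectors, treats the bra as the conjugate transpose of the ket, and computes the bracket \lang \psi_1|\psi_2\rang as a conjugated dot product.

```python
import numpy as np

# Two kets in a two-dimensional Hilbert space, written as complex vectors.
# (The particular numbers are arbitrary; any nonzero vectors would do.)
psi1 = np.array([1.0 + 0.0j, 1.0j])
psi2 = np.array([1.0 + 0.0j, -1.0j])

# The bra <psi1| is the conjugate transpose of the ket |psi1>, so the
# bracket <psi1|psi2> is a dot product with the first factor conjugated.
bracket = np.vdot(psi1, psi2)      # np.vdot conjugates its first argument
print(bracket)                      # 0j here: these two kets happen to be orthogonal

# Normalizing a ket rescales it by a positive real number, so it still
# represents the same physical state (the same ray in Hilbert space).
psi1_hat = psi1 / np.linalg.norm(psi1)
print(np.vdot(psi1_hat, psi1_hat))  # approximately (1+0j): <psi|psi> = 1 for a normalized ket
```

Because np.vdot conjugates its first argument, it reproduces the inner-product convention used for brackets, and rescaling a ket by a nonzero complex scalar changes the vector but not the ray, i.e., not the physical state it represents.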
## Spin, Many-body states It is important to note that in quantum mechanics, besides, e.g., the usual position variable \mathbf r, a discrete variable m exists, corresponding to the value of the z-component of the spin vector. This is some kind of intrinsic angular momentum, which, however, does not appear at all in classical mechanics and is in fact a legacy from Dirac's relativistic generalization of the theory. As a consequence, the quantum state of a system of N particles is described by a function with four variables per particle, e.g. |\psi (\mathbf r_1,m_1;\dots ;\mathbf r_N,m_N)\rangle. Here, the variables m_\nu assume values from the set {-S_\nu, -S_\nu +1, ..., +S_\nu -1,+S_\nu}, where S_\nu (in units of Planck's reduced constant \hbar) is either a non-negative integer (0,1,2...; bosons) or a half-integer (1/2,3/2,5/2,...; fermions). Moreover, in the case of identical particles, the above N-particle function must either be symmetrized (in the bosonic case) or anti-symmetrized (in the fermionic case) w.r.t. the particle numbers. Electrons are fermions with S=1/2, photons (quanta of light) are bosons with S=1. Apart from the symmetrization or anti-symmetrization, N-particle states can thus simply be obtained by tensor products of one-particle states, to which we now return. ## Basis states of one-particle systems As with any vector space, if a basis is chosen for the Hilbert space of a system, then any ket can be expanded as a linear combination of those basis elements. Symbolically, given basis kets |k_i\rang, any ket |\psi\rang can be written as |\psi\rang = \sum_i c_i |k_i\rang, where the c_i are complex numbers. In physical terms, this is described by saying that |\psi\rang has been expressed as a quantum superposition of the states |k_i\rang. If the basis kets are chosen to be orthonormal (as is often the case), then c_i=\lang k_i|\psi\rang. One property worth noting is that the normalized states |\psi\rang are characterized by \sum_i |c_i|^2 = 1. Expansions of this sort play an important role in measurement in quantum mechanics. In particular, if the |k_i\rang are eigenstates (with eigenvalues k_i) of an observable, and that observable is measured on the normalized state |\psi\rang, then the probability that the result of the measurement is k_i is |c_i|^2. (The normalization condition above mandates that the total sum of probabilities is equal to one.) A particularly important example is the position basis, which is the basis consisting of eigenstates of the observable which corresponds to measuring position. If these eigenstates are nondegenerate (for example, if the system is a single, spinless particle), then any ket |\psi\rang is associated with a complex-valued function of three-dimensional space, \psi(\mathbf r) = \lang \mathbf r|\psi\rang. This function is called the wavefunction corresponding to |\psi\rang. ## Superposition of pure states One aspect of quantum states, mentioned above, is that superpositions of them can be formed. If |\alpha\rangle and |\beta\rangle are two kets corresponding to quantum states, the ket c_\alpha|\alpha\rangle + c_\beta|\beta\rangle is a different quantum state (possibly not normalized). Note that which quantum state it is depends on both the amplitudes and phases (arguments) of c_\alpha and c_\beta. In other words, for example, even though |\psi\rang and e^{i\theta}|\psi\rang (for real θ) correspond to the same physical quantum state, they are not interchangeable, since for example |\phi\rang+|\psi\rang and |\phi\rang+e^{i\theta}|\psi\rang do not (in general) correspond to the same physical state.
However, |\phi\rang+|\psi\rang and e^{i\theta}(|\phi\rang+|\psi\rang) do correspond to the same physical state. This is sometimes described by saying that "global" phase factors are unphysical, but "relative" phase factors are physical and important. One example of a quantum interference phenomenon that arises from superposition is the double-slit experiment. The photon state is a superposition of two different states, one of which corresponds to the photon having passed through the left slit, and the other corresponding to passage through the right slit. The relative phase of those two states has a value which depends on the distance from each of the two slits. Depending on what that phase is, the interference is constructive at some locations and destructive in others, creating the interference pattern. Another example of the importance of relative phase in quantum superposition is Rabi oscillations, where the relative phase of two states varies in time due to the Schrödinger equation. The resulting superposition ends up oscillating back and forth between two different states. ## Mixed states A pure quantum state is a state which can be described by a single ket vector, as described above. A mixed quantum state is a statistical ensemble of pure states (see quantum statistical mechanics). A mixed state cannot be described as a ket vector. Instead, it is described by its associated density matrix (or density operator), usually denoted \rho. Note that density matrices can describe both mixed and pure states, treating them on the same footing. The density matrix is defined as \rho = \sum_s p_s |\psi_s\rangle \langle\psi_s|, where p_s is the fraction of the ensemble in each pure state |\psi_s\rangle. Here, one typically uses a one-particle formalism to describe the average behaviour of an N-particle system. A simple criterion for checking whether a density matrix is describing a pure or mixed state is that the trace of \rho^2 is equal to 1 if the state is pure, and less than 1 if the state is mixed. Another, equivalent, criterion is that the von Neumann entropy is 0 for a pure state, and strictly positive for a mixed state. The rules for measurement in quantum mechanics are particularly simple to state in terms of density matrices. For example, the ensemble average (expectation value) of a measurement corresponding to an observable A is given by \langle A \rangle = \sum_s p_s \sum_i a_i |\lang \alpha_i|\psi_s\rang|^2 = \operatorname{tr}(\rho A), where |\alpha_i\rangle, \; a_i are eigenkets and eigenvalues, respectively, for the operator A, and tr denotes trace. It is important to note that two types of averaging are occurring, one being a quantum average over the basis kets |\psi_s\rangle of the pure states, and the other being a statistical average with the probabilities p_s of those states. With respect to these different types of averaging, i.e. to distinguish pure and mixed states, one often speaks of a 'coherent' versus an 'incoherent' superposition of quantum states. # Mathematical formulation For a mathematical discussion on states as functionals, see Gelfand-Naimark-Segal construction. There, the same objects are described in a C*-algebraic context. # Notes - ↑ If you are not familiar with the concept of momentum, think of it as being the velocity of the particle. That is fully justified in this context. - ↑ To avoid misunderstandings: Here we mean that Q(t) and P(t) are measured in the same state, but not in the same run of the experiment. - ↑ For concreteness' sake, you may suppose that A=Q(t_1) and B=P(t_2) in the above example, with t_2>t_1>0.
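To make the basis-expansion probabilities and the pure-versus-mixed distinction above concrete, here is a short NumPy sketch. It is illustrative only: the specific states (psi_a, psi_b), ensemble probabilities, and observable are invented for the example. It builds a density matrix from an ensemble of pure states, checks the purity criterion via the trace of the squared density matrix, and computes an ensemble average as a trace.

```python
import numpy as np

# Two normalized pure states of a 2-level system (made-up example states).
psi_a = np.array([1.0, 0.0], dtype=complex)                # |0>
psi_b = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)   # (|0> + |1>)/sqrt(2)

# Statistical mixture: 70% of the ensemble in psi_a, 30% in psi_b.
p = [0.7, 0.3]
states = [psi_a, psi_b]

# Density matrix: rho = sum_s p_s |psi_s><psi_s|
rho = sum(ps * np.outer(s, s.conj()) for ps, s in zip(p, states))

# Purity check: tr(rho^2) equals 1 only for a pure state, and is < 1 for a mixed one.
purity = np.trace(rho @ rho).real

# An example Hermitian observable; here the Pauli-z operator, whose eigenbasis
# is the standard basis with eigenvalues +1 and -1.
A = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)

# Ensemble average (expectation value): <A> = tr(rho A)
expectation = np.trace(rho @ A).real

# For a single pure state, the probabilities of the measurement outcomes are
# |<k_i|psi>|^2 in an orthonormal eigenbasis {|k_i>} of the observable.
probs = np.abs(psi_b) ** 2   # outcome probabilities for psi_b in the A-eigenbasis

print(purity, expectation, probs)
```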
Quantum state Template:Quantum mechanics In quantum physics, a quantum state is a mathematical object that fully describes a quantum system. One typically imagines some experimental apparatus and procedure which "prepares" this quantum state; the mathematical object then reflects the setup of the apparatus. Quantum states can be statistically mixed, corresponding to an experiment involving a random change of the parameters. States obtained in this way are called mixed states, as opposed to pure states which cannot be described as a mixture of others. When performing a certain measurement on a quantum state, the result is in general described by a probability distribution, and the form that this distribution takes is completely determined by the quantum state and the observable describing the measurement. However, unlike in classical mechanics, the result of a measurement on even a pure quantum state is only determined probabilistically. This reflects a core difference between classical and quantum physics. Mathematically, a pure quantum state is typically represented by a vector in a Hilbert space. In physics, bra-ket notation is often used to denote such vectors. Linear combinations (superpositions) of vectors can describe interference phenomena. Mixed quantum states are described by density matrices. In a more general mathematical context, quantum states can be understood as positive normalized linear functionals on a C* algebra; see GNS construction. # Conceptual description ## The state of a physical system The state of a physical system is a complete description of the parameters of the experiment. To understand this rather abstract notion, it is useful to first explore it in an example from classical mechanics. Consider an experiment with a (non-quantum) particle of mass <math>m=1</math> which moves freely, and without friction, in one spatial direction. We start the experiment at time <math>t=0</math> by pushing the particle with some speed into some direction. Doing this, we determine the initial position <math>q</math> and the initial momentum[1] <math>p</math> of the particle. These initial conditions are what characterizes the state <math>\sigma</math> of the system, formally denoted as <math> \sigma = (p,q) </math>. We say that we prepare the state of the system by fixing its initial conditions. At a later time <math>t>0</math>, we conduct measurements on the particle. The measurements we can perform on this simple system are essentially its position <math>Q(t)</math> at time <math>t</math>, its momentum <math>P(t)</math>, and combinations of these. Here <math>P(t)</math> and <math>Q(t)</math> refer to the measurable quantities (observables) of the system as such, not the specific results they produce in a certain run of the experiment. However, knowing the state <math>\sigma</math> of the system, we can compute the value of the observables in the specific state, i.e., the results that our measurements will produce, depending on <math>p</math> and <math>q</math>. We denote these values as <math>\langle P(t) \rangle _\sigma</math> and <math>\langle Q(t) \rangle _\sigma</math>. In our simple example, it is well known that the particle moves with constant velocity; therefore, <math> \langle P(t) \rangle _\sigma = p, \quad \langle Q(t) \rangle _\sigma = pt+q. </math> Now suppose that we start the particle with a random initial position and momentum. 
(For argument's sake, we may suppose that the particle is pushed away at <math>t=0</math> by some apparatus which is controlled by a random number generator.) The state <math>\sigma</math> of the system is now not described by two numbers <math>p</math> and <math>q</math>, but rather by two probability distributions. The observables <math>P(t)</math> and <math>Q(t)</math> will produce random results now; they become random variables, and their values in a single measurement cannot be predicted. However, if we repeat the experiment sufficiently often, always preparing the same state <math>\sigma</math>, we can predict the expectation value of the observables (their statistical mean) in the state <math>\sigma</math>. The expectation value of <math>P(t)</math> is again denoted by <math>\langle P(t) \rangle _\sigma</math>, etc. These "statistical" states of the system are called mixed states, as opposed to the pure states <math>\sigma=(p,q)</math> discussed further above. Abstractly, mixed states arise as a statistical mixture of pure states. ## Quantum states In quantum systems, the conceptual distinction between observables and states persists just as described above. The state <math>\sigma</math> of the system is fixed by the way the physicist prepares his experiment (e.g., how he adjusts his particle source). As above, there is a distinction between pure states and mixed states, the latter being statistical mixtures of the former. However, some important differences arise in comparison with classical mechanics. In quantum theory, even pure states show statistical behaviour. Regardless of how carefully we prepare the state <math>\rho</math> of the system, measurement results are not repeatable in general, and we must understand the expectation value <math>\langle A \rangle _\sigma</math> of an observable <math>A</math> as a statistical mean. It is this mean that is predicted by physical theories. For any fixed observable <math>A</math>, it is generally possible to prepare a pure state <math>\sigma_A</math> such that <math>A</math> has a fixed value in this state: If we repeat the experiment several times, each time measuring <math>A</math>, we will always obtain the same measurement result, without any random behaviour. Such pure states <math>\sigma_A</math> are called eigenstates of <math>A</math>. However, it is impossible to prepare a simultaneous eigenstate for all observables. For example, we cannot prepare a state such that both the position measurement <math>Q(t)</math> and the momentum measurement <math>P(t)</math> (at the same time <math>t</math>) produce "sharp" results; at least one of them will exhibit random behaviour.[2] This is the content of the Heisenberg uncertainty relation. Moreover, in contrast to classical mechanics, it is unavoidable that performing a measurement on the system generally changes its state. More precisely: After measuring an observable <math>A</math>, the system will be in an eigenstate of <math>A</math>; thus the state has changed, unless the system was already in that eigenstate. This expresses a kind of logical consistency: If we measure <math>A</math> twice in the same run of the experiment, the measurements being directly consecutive in time, then they will produce the same results. This has some strange consequences however: Consider two observables, <math>A</math> and <math>B</math>, where <math>A</math> corresponds to a measurement earlier in time than <math>B</math>.[3] Suppose that the system is in an eigenstate of <math>B</math>. 
If we measure only <math>B</math>, we will not notice statistical behaviour. If we measure first <math>A</math> and then <math>B</math> in the same run of the experiment, the system will transfer to an eigenstate of <math>A</math> after the first measurement, and we will generally notice that the results of <math>B</math> are statistical. Thus, quantum mechanical measurements influence one another, and it is important in which order they are performed. Another feature of quantum states becomes relevant if we consider a physical system that consists of multiple subsystems; for example, an experiment with two particles rather than one. Quantum physics allows for certain states, called entangled states, that show certain statistical correlations between measurements on the two particles which cannot be explained by classical theory. For details, see entanglement. These entangled states lead to experimentally testable properties (Bell's theorem) that allow to distinguish between quantum theory and alternative classical (non-quantum) models. ## Schrödinger picture vs. Heisenberg picture In the discussion above, we have taken the observables <math>P(t)</math>, <math>Q(t)</math> to be dependent on time, while the state <math>\sigma</math> was fixed once at the beginning of the experiment. This approach is called the Heisenberg picture. One can, equivalently, treat the observables as fixed, while the state of the system depends on time; that is known as the Schrödinger picture. Conceptually (and mathematically), both approaches are equivalent; choosing one of them is a matter of convention. Both viewpoints are used in quantum theory. While non-relativistic quantum mechanics is usually formulated in terms of the Schrödinger picture, the Heisenberg picture is often preferred in a relativistic context, that is, for quantum field theory. # Formalism in quantum physics ## Pure states as rays in a Hilbert space Quantum physics is most commonly formulated in terms of linear algebra, as follows. Any given system is identified with some Hilbert space, such that each vector in the Hilbert space (apart from the origin) corresponds to a pure quantum state. In addition, two vectors that differ only by a nonzero complex scalar correspond to the same state (in other words, each pure state is a ray in the Hilbert space). Alternatively, many authors choose to only consider normalized vectors (vectors of norm 1) as corresponding to quantum states. In this case, the set of all pure states corresponds to the unit sphere of a Hilbert space, with the proviso that two normalized vectors correspond to the same state if they differ only by a complex scalar of absolute value 1 (called a phase factor). ## Bra-ket notation Calculations in quantum mechanics make frequent use of linear operators, inner products, dual spaces, and Hermitian conjugation. In order to make such calculations more straightforward, and to obviate the need (in some contexts) to fully understand the underlying linear algebra, Paul Dirac invented a notation to describe quantum states, known as bra-ket notation. Although the details of this are beyond the scope of this article (see the article Bra-ket notation), some consequences of this are: - The variable name used to denote a vector (which corresponds to a pure quantum state) is chosen to be of the form <math>|\psi\rangle</math> (where the "<math>\psi</math>" can be replaced by any other symbols, letters, numbers, or even words). 
This can be contrasted with the usual mathematical notation, where vectors are usually bold, lower-case letters, or letters with arrows on top. - Instead of vector, the term ket is used synonymously. - Each ket <math>|\psi\rangle</math> is uniquely associated with a so-called bra, denoted <math>\langle\psi|</math>, which is also said to correspond to the same physical quantum state. Technically, the bra is an element of the dual space, and related to the ket by the Riesz representation theorem. - Inner products (also called brackets) are written so as to look like a bra and ket next to each other: <math>\lang \psi_1|\psi_2\rang</math>. (Note that the phrase "bra-ket" is supposed to resemble "bracket".) ## Spin, Many-body states It is important to note that in quantum mechanics besides, e.g., the usual position variable <math>\mathbf r</math>, a discrete variable m exists, corresponding to the value of the z-component of the spin vector. This is some kind of intrinsic angular momentum, which does, however, not appear at all in classical mechanics and is in fact a legacy from Dirac's relativistic generalization of the theory. As a consequence, the quantum state of a system of N particles is described by a function with four variables per particle, e.g. <math>|\psi (\mathbf r_1,m_1;\dots ;\mathbf r_N,m_N)\rangle</math>. Here, the variables mν assume values from the set {<math>-S_\nu, -S_\nu +1, ..., +S_\nu -1,+S_\nu</math>}, where <math>S_\nu</math> (in units of Planck's reduced constant <math>\hbar</math>), is either a non-negative integer (0,1,2...; bosons), or semi-integer (1/2,3/2,5/2,...; fermions). Moreover, in the case of identical particles, the above N-particle function must either be symmetrized (in the bosonic case) or anti-symmetrized (in the fermionic case) w.r.t. the particle numbers. Electrons are fermions with S=1/2, photons (quanta of light) are bosons with S=1. Apart from the symmetrization or anti-symmetrization, N-particle states can thus simply be obtained by tensor products of one-particle states, to which we return herewith. ## Basis states of one-particle systems As with any vector space, if a basis is chosen for the Hilbert space of a system, then any ket can be expanded as a linear combination of those basis elements. Symbolically, given basis kets <math>|k_i\rang</math>, any ket <math>|\psi\rang</math> can be written where ci are complex numbers. In physical terms, this is described by saying that <math>|\psi\rang</math> has been expressed as a quantum superposition of the states <math>|k_i\rang</math>. If the basis kets are chosen to be orthonormal (as is often the case), then <math>c_i=\lang k_i|\psi\rang</math>. One property worth noting is that the normalized states <math>|\psi\rang</math> are characterized by Expansions of this sort play an important role in measurement in quantum mechanics. In particular, If the <math>|k_i\rang</math> are eigenstates (with eigenvalues <math>k_i</math>) of an observable, and that observable is measured on the normalized state <math>|\psi\rang</math>, then the probability that the result of the measurement is ki is |ci|2. (The normalization condition above mandates that the total sum of probabilities is equal to one.) A particularly important example is the position basis, which is the basis consisting of eigenstates of the observable which corresponds to measuring position. 
If these eigenstates are nondegenerate (for example, if the system is a single, spinless particle), then any ket <math>|\psi\rang</math> is associated with a complex-valued function of three-dimensional space: This function is called the wavefunction corresponding to <math>|\psi\rang</math>. ## Superposition of pure states One aspect of quantum states, mentioned above, is that superpositions of them can be formed. If <math>|\alpha\rangle</math> and <math>|\beta\rangle</math> are two kets corresponding to quantum states, the ket is a different quantum state (possibly not normalized). Note that which quantum state it is depends on both the amplitudes and phases (arguments) of <math>c_\alpha</math> and <math>c_\beta</math>. In other words, for example, even though <math>|\psi\rang</math> and <math>e^{i\theta}|\psi\rang</math> (for real θ) correspond to the same physical quantum state, they are not interchangeable, since for example <math>|\phi\rang+|\psi\rang</math> and <math>|\phi\rang+e^{i\theta}|\psi\rang</math> do not (in general) correspond to the same physical state. However, <math>|\phi\rang+|\psi\rang</math> and <math>e^{i\theta}(|\phi\rang+|\psi\rang)</math> do correspond to the same physical state. This is sometimes described by saying that "global" phase factors are unphysical, but "relative" phase factors are physical and important. One example of a quantum interference phenomenon that arises from superposition is the double-slit experiment. The photon state is a superposition of two different states, one of which corresponds to the photon having passed through the left slit, and the other corresponding to passage through the right slit. The relative phase of those two states has a value which depends on the distance from each of the two slits. Depending on what that phase is, the interference is constructive at some locations and destructive in others, creating the interference pattern. Another example of the importance of relative phase in quantum superposition is Rabi oscillations, where the relative phase of two states varies in time due to the Schrödinger equation. The resulting superposition ends up oscillating back and forth between two different states. ## Mixed states A pure quantum state is a state which can be described by a single ket vector, as described above. A mixed quantum state is a statistical ensemble of pure states (see quantum statistical mechanics). A mixed state cannot be described as a ket vector. Instead, it is described by its associated density matrix (or density operator), usually denoted <math>\rho</math>. Note that density matrices can describe both mixed and pure states, treating them on the same footing. The density matrix is defined as where <math>p_s</math> is the fraction of the ensemble in each pure state <math>|\psi_s\rangle.</math> Here, one typically uses a one-particle formalism to describe the average behaviour of a N-particle system. A simple criterion for checking whether a density matrix is describing a pure or mixed state is that the trace of ρ2 is equal to 1 if the state is pure, and less than 1 if the state is mixed. Another, equivalent, criterion is that the von Neumann entropy is 0 for a pure state, and strictly positive for a mixed state. The rules for measurement in quantum mechanics are particularly simple to state in terms of density matrices. 
For example, the ensemble average (expectation value) of a measurement corresponding to an observable <math>A</math> is given by where <math>|\alpha_i\rangle, \; a_i</math> are eigenkets and eigenvalues, respectively, for the operator <math>A</math>, and tr denotes trace. It is important to note that two types of averaging are occurring, one being a quantum average over the basis kets <math>|\psi_s\rangle</math> of the pure states, and the other being a statistical average with the probabilities <math>p_s</math> of those states. W.r.t. these different types of averaging, i.e. to distinguish pure and/or mixed states, one often uses the expressions 'coherent' and/or 'incoherent superposition' of quantum states. # Mathematical formulation For a mathematical discussion on states as functionals, see Gelfand-Naimark-Segal construction. There, the same objects are described in a C*-algebraic context. # Notes - ↑ If you are not familiar with the concept of momentum, think of it as being the velocity of the particle. That is fully justified in this context. - ↑ To avoid misunderstandings: Here we mean that <math>Q(t)</math> and <math>P(t)</math> are measured in the same state, but not in the same run of the experiment.) - ↑ For concreteness' sake, you may suppose that <math>A=Q(t_1)</math> and <math>B=P(t_2)</math> in the above example, with <math>t_2>t_1>0</math>.
https://www.wikidoc.org/index.php/Quantum_state
637fbbabdbddf69826b77e8a4c69d6ca7a46adc0
wikidoc
Quantum yield
Quantum yield # Overview The quantum yield of a radiation-induced process is the number of times that a defined event occurs per photon absorbed by the system. Thus, the quantum yield is a measure of the efficiency with which absorbed light produces some effect. For example, in a chemical photodegradation process, when a molecule falls apart after absorbing a light quantum, the quantum yield is the number of destroyed molecules divided by the number of photons absorbed by the system. Since not all photons are absorbed productively, the typical quantum yield will be less than 1. Quantum yields greater than 1 are possible for photo-induced or radiation-induced chain reactions, in which a single photon may trigger a long chain of transformations. One example is the reaction of hydrogen with chlorine, in which a few hundred molecules of hydrochloric acid are typically formed per quantum of blue light absorbed. In optical spectroscopy, the quantum yield is the probability that a given quantum state is formed from the system initially prepared in some other quantum state. For example, a singlet to triplet transition quantum yield is the fraction of molecules that, after being photoexcited into a singlet state, cross over to the triplet state. The fluorescence quantum yield is defined as the ratio of the number of photons emitted to the number of photons absorbed.
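Because the quantum yield is simply a ratio of counted events to photons absorbed, it is straightforward to compute once both counts are known. The short Python sketch below is illustrative only; the photon and event counts are invented for the example.

```python
def quantum_yield(events, photons_absorbed):
    """Number of defined events (e.g. molecules destroyed, photons emitted)
    per photon absorbed by the system."""
    if photons_absorbed <= 0:
        raise ValueError("photons_absorbed must be positive")
    return events / photons_absorbed

# Photodegradation example: 2.0e18 molecules destroyed per 5.0e18 photons absorbed.
phi_degradation = quantum_yield(2.0e18, 5.0e18)   # 0.4, i.e. less than 1

# Fluorescence example: photons emitted per photon absorbed.
phi_fluorescence = quantum_yield(3.1e17, 5.0e17)  # 0.62

# A photo-induced chain reaction can give a quantum yield far greater than 1.
phi_chain = quantum_yield(4.0e20, 1.0e18)         # about 400

print(phi_degradation, phi_fluorescence, phi_chain)
```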
Quantum yield # Overview The quantum yield of a radiation-induced process is the number of times that a defined event occurs per photon absorbed by the system. Thus, the quantum yield is a measure of the efficiency with which absorbed light produces some effect. For example, in a chemical photodegradation process, when a molecule falls apart after absorbing a light quantum, the quantum yield is the number of destroyed molecules divided by the number of photons absorbed by the system. Since not all photons are absorbed productively, the typical quantum yield will be less than 1. Quantum yields greater than 1 are possible for photo-induced or radiation-induced chain reactions, in which a single photon may trigger a long chain of transformations. One example is the reaction of hydrogen with chlorine, in which a few hundred molecules of hydrochloric acid are typically formed per quantum of blue light absorbed. In optical spectroscopy, the quantum yield is the probability that a given quantum state is formed from the system initially prepared in some other quantum state. For example, a singlet to triplet transition quantum yield is the fraction of molecules that, after being photoexcited into a singlet state, cross over to the triplet state. The fluorescence quantum yield is defined as the ratio of the number of photons emitted to the number of photons absorbed.
https://www.wikidoc.org/index.php/Quantum_yield
2ce758901d8d9ca1a47f3f5750060be2e292f190
wikidoc
Quaternium-15
Quaternium-15 Quaternium-15 is a preservative found in many cosmetics and industrial substances that releases formaldehyde. It can be found in numerous sources, including but not limited to: mascara, eyeliner, moisturizer, lotion, shampoo, conditioner, nail polish, personal lubricants, soaps, body wash, baby lotion or shampoo, facial cleanser, tanning oil, self-tanning cream, sunscreen, powder, shaving products, ointments, personal wipes or cleansers, wipes, paper, inks, paints, polishes, waxes and industrial lubricants. It can cause contact dermatitis, a symptom of an allergic reaction, especially in those with sensitive skin, on an infant's skin, or on sensitive areas such as the genitals. Its chemical formula is C9H16Cl2N4. It can be found under a variety of names, including: Dowicil 75; Dowicil 100; Dowco 184; Dowicide Q; 1-(3-Chloroallyl)-3,5,7-triaza-1-azoniaadamantane chloride; N-(3-chloroallyl) hexaminium chloride; hexamethylenetetramine chloroallyl chloride; 3,5,7-Triaza-1-azoniaadamantane; 1-(3-chloroallyl)-chloride. # Formaldehyde-releasing Other formaldehyde-releasing preservatives similar to quaternium-15 include: imidazolidinyl urea (Germall®), diazolidinyl urea (Germall II®), DMDM hydantoin (Glydant®), bromonitropropane diol (Bronopol™), tris(hydroxymethyl) nitromethane (Tris Nitro®), and sodium hydroxymethylglycinate. # Safety concerns Quaternium-15 is an allergen, and can cause contact dermatitis in susceptible individuals. Many of those with an allergy to quaternium-15 are also allergic to formaldehyde. Allergic sensitivity to quaternium-15 can be detected using a patch test.
Quaternium-15 Quaternium-15 is a preservative found in many cosmetics and industrial substances that releases formaldehyde. It can be found in numerous sources, including but not limited to: mascara, eyeliner, moisturizer, lotion, shampoo, conditioner, nail polish, personal lubricants, soaps, body wash, baby lotion or shampoo, facial cleanser, tanning oil, self-tanning cream, sunscreen, powder, shaving products, ointments, personal wipes or cleansers, wipes, paper, inks, paints, polishes, waxes and industrial lubricants. It can cause contact dermatitis, a symptom of an allergic reaction, especially in those with sensitive skin, on an infant's skin, or on sensitive areas such as the genitals. Its chemical formula is C9H16Cl2N4. It can be found under a variety of names, including: Dowicil 75; Dowicil 100; Dowco 184; Dowicide Q; 1-(3-Chloroallyl)-3,5,7-triaza-1-azoniaadamantane chloride; N-(3-chloroallyl) hexaminium chloride; hexamethylenetetramine chloroallyl chloride; 3,5,7-Triaza-1-azoniaadamantane; 1-(3-chloroallyl)-chloride. Template:SMILESCAS # Formaldehyde-releasing Other formaldehyde-releasing preservatives similar to quaternium-15 include: imidazolidinyl urea (Germall®), diazolidinyl urea (Germall II®), DMDM hydantoin (Glydant®), bromonitropropane diol (Bronopol™), tris(hydroxymethyl) nitromethane (Tris Nitro®), and sodium hydroxymethylglycinate. # Safety concerns Template:Expand Quaternium-15 is an allergen, and can cause contact dermatitis in susceptible individuals.[1] Many of those with an allergy to quaternium-15 are also allergic to formaldehyde. Allergic sensitivity to quaternium-15 can be detected using a patch test.[2]
https://www.wikidoc.org/index.php/Quaternium-15
18f49cfde517ebdc4e47ebe86d13a8489969fd00
wikidoc
Quest Academy
Quest Academy Quest Academy is a small independent school for gifted and talented children located in Palatine, Illinois. The school is accredited by the Independent Schools Association of the Central States, and it is a member of the National Association of Independent Schools and the National Association for Gifted Children. The standard tuition for one child for one year is $16,000 USD. Financial aid, awarded on a need basis, is available for those who require it. There are two classes per grade, but only two preschool classes in all. The building has two stories, and grades 3 through 8 are on the second floor. Band sessions are held in a separate building that was once part of a plaza, and the preschool classes are also housed in a separate building. There is a store where Quest apparel and other such items can be purchased. # History The school, first known as Creative Children's Academy, was started in 1982 by parents looking for an educational option for gifted children struggling in public schools. The school was awarded full accreditation by the Independent Schools Association of the Central States (ISACS) in 1988. In 1993, the park district which then owned the school's facility announced its decision to raze the building. Two school administrators agreed to share in the school board's purchase of the former Palatine Public Library, which would be remodeled into a school facility, and to share the head of school position. The school's name was changed to Quest Academy in 1999, and a capital campaign funded the addition of a gymnasium and performing arts wing. # Campus # Extracurricular activities Quest Academy has a no-cut sports policy. Its middle school athletic teams include boys' and girls' cross country, basketball, soccer, volleyball, and track. These teams compete against other small, independent schools with similar philosophies. Other extracurricular activities at Quest Academy include the journalism club, the Knight Program, buddy groups, the Birthday Bomb Club, and others. The journalism club produces a student newspaper as well as Myriastella, a yearly publication of student writings and poems. The student council program is called the Knight Program. To become a knight, a student must complete a community service project. Schoolwide "pageant" assemblies are held on the first Monday of every month, where new knights are recognized and "squires" are recognized for displaying good character. Other clubs are offered as well, and an extensive community service program is in place. There is also an after-school stock market program, in which two students from the school won a prize. Quest also competes in several math competitions, one of which is held at The Latin School of Chicago and another of which is Mathcounts. Quest Academy provides gifted students with an environment in which they can work at their level in various subjects. The curriculum covers mathematics, language arts, social studies, art, drama, science, music, French, technology, and library. Elective trimester-long classes are also offered several days a week.
Quest Academy Template:Infobox School Quest Academy is a small independent school for gifted and talented children located in Palatine, Illinois.[1] The school is accredited by the Independent Schools Association of the Central States, and it is a member of the National Association of Independent Schools and the National Association for Gifted Children. The standard tuition for one child for one year is $16,000 USD. Financial aid, awarded on a need basis, is available for those who require it. There are two classes per grade, but only two preschool classes in all. There are two stories in the building, and grades 3rd through 8th are on the second floor. There is a separate building, that was part of a plaza, where the band sessions are held. The preschool classes are also in a separate building. There is a store where you can buy Quest apparel and other such items. # History The school, first known as Creative Children's Academy, was started in 1982 by parents looking for an educational option for gifted children struggling in public schools. The school was awarded full accreditation by the Independent Schools Association of the Central States (ISACS) in 1988. In 1993, the park district which then owned the school's facility announced its decision to raze the building. Two school administrators agreed to share the school board's purchase of the former Palatine Public Library, which would be remodeled into a school facility, as well as the head of school position. The school's name was changed to Quest Academy in 1999 and a capital campaign funded the addition of a gymnasium and performing arts wing.[2] # Campus Template:Section-stub # Extracurricular activities Quest Academy has a no-cut sports policy. Its middle school athletic teams include boys' and girls' cross country, basketball, soccer, volleyball, and track. These teams compete against other small, independent schools with similar philosophies. Other extracurricular activities at Quest Academy include journalism club, the Knight Program, buddy groups, Birthday Bomb Club, etc. The journalism club produces a student newspaper as well as Myriastella, a yearly publication of student writings and poems. The student council program is called the Knight Program. To become a knight, a student must complete a community service project. Schoolwide "pageant" assemblies are held on every first Monday of the month, where new knights are recognized and "squires" are recognized for displaying good character. Other clubs include: An extensive community service program is in place There is also an after school special called stock marketing. 2 people from the school won a prize. Quest also competes in several math competitions, one of which is at The Latin School of Chicago, and one of which is Mathcounts. Quest Academy provides gifted students with an environment in which they can work at their level in various subjects. The curriculum covers mathematics, language arts, social studies, art, drama, science, music, French, technology, and library. Elective trimester-long classes are also offered several days a week.
https://www.wikidoc.org/index.php/Quest_Academy
fa504b02b6489da3a1845b47951bd64ed22bcbde
wikidoc
REGRESS Trial
REGRESS Trial # Objective To evaluate the effects of cholesterol-lowering therapy using a hydroxymethyl glutaryl coenzyme A reductase inhibitor (pravastatin) in symptomatic men with coronary artery disease (CAD). # Methods The Regression Growth Evaluation Statin Study (REGRESS) was a multicenter, prospective, double-blind, randomized, placebo-controlled trial that enrolled 885 men with established coronary artery disease and total cholesterol levels between 155 and 310 mg/dL. The patients were randomized into two groups, treatment and control, and followed up for two years. The effect of pravastatin on progression and regression of coronary atherosclerosis was assessed by quantitative coronary arteriography. All the patients received routine antianginal treatment for the duration of the trial. # Results Percent diameter stenosis before angioplasty was 78 +/- 14% (mean +/- SD) in the pravastatin group and 80 +/- 14% in the placebo group (p = 0.46). At follow-up, the percent diameter stenosis was 32 +/- 23% in the pravastatin group and 45 +/- 29% in the placebo group (p < 0.001). Clinical restenosis was significantly lower in the pravastatin group (7%) compared with the placebo group (29%) (p < 0.001). # Conclusion In symptomatic men with significant coronary artery disease and normal to moderately elevated serum cholesterol, less progression of coronary atherosclerosis and fewer new cardiovascular events were observed in the group of patients treated with pravastatin than in the placebo group.
REGRESS Trial Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1] # Objective To evaluate the effects of cholesterol lowering therapy, using a hydroxymethyl glutaryl coenzyme A reductase inhibitor (pravastatin) in symptomatic men with coronary artery disease (CAD). # Methods Regression Growth Evaluation Statin Study (REGRESS) was a multicentered, prospective, double-blinded, randomized, placebo-controlled trial that enrolled 885 men with established coronary artery disease with total cholesterol levels in the range of 155 and 310 mg/dL. The patients were randomized into two groups, treatment and control and followed up for two years. Effect of pravastatin on progression and regression of coronary atherosclerosis was assessed by quantitative coronary arteriography. All the patients received routine antianginal treatment for the duration of the trial. # Results Percent diameter stenosis before angioplasty was 78 +/- 14% (mean +/- SD) in the pravastatin group and 80 +/- 14% in the placebo group (p = 0.46). At follow-up, the percent diameter stenosis was 32 +/- 23% in the pravastatin group and 45 +/- 29% in the placebo group (p < 0.001). Clinical restenosis was significantly lower in the pravastatin group (7%) compared with the placebo group (29%) (p < 0.001). # Conclusion In symptomatic men with significant coronary artery disease and normal to moderately elevated serum cholesterol, less progression of coronary atherosclerosis and fewer new cardiovascular events were observed in the group of patients treated with pravastatin than in the placebo group.[1][2][3][4]
https://www.wikidoc.org/index.php/REGRESS_Trial
5e7eab58c53383d7ca8b1b9b4acaf47b4f9e6346
wikidoc
Race for Life
Race for Life Race for Life is a series of UK-wide women-only fundraising events organised by the British charity Cancer Research UK. Although participation is limited to women, men can get involved by volunteering and marshalling at the event. Race for Life involves running, jogging or walking a 5-kilometre course and raising sponsorship from friends and family for doing so. The money raised is donated to the charity and funds cancer research and campaigns. The first Race for Life event took place in 1994, when 680 participants took part in a race in Battersea Park, London and raised £36,000. Race for Life has subsequently grown to become one of the UK's largest fundraising events, which in 2006 involved 240 races and 750,000 participants and raised £46 million. Since its inception, Race for Life has raised over £100 million for the charity. Cancer Research UK's Bobby Moore Fund also organises a similar event for men, Run for Moore. The proceeds from this event go towards bowel cancer research and campaigns.
Race for Life Race for Life is a series of UK-wide women-only fundraising events organised by the British charity Cancer Research UK. Although participation is limited to women, men can get involved by volunteering and marshalling at the event [1]. Race for Life involves running, jogging or walking a 5-kilometre course and raising sponsorship from friends and family for doing so. The money raised is donated to the charity and funds cancer research and campaigns. The first Race for Life event took place in 1994 when 680 participants participated in a race in Battersea Park, London and raised £36,000. Race for Life has subsequently grown to become one of the UK's largest fundraising events, which in 2006 involved 240 races and 750,000 participants and raised £46 million. Since its inception, Race for Life has raised over £100 million for the charity. Cancer Research UK's Bobby Moore Fund also organises a similar event for men, Run for Moore. The proceeds from this event go towards bowel cancer research and campaigns[2]. # External links - Race for Life website - Cancer Research UK website - Race for Life's myspace profile
https://www.wikidoc.org/index.php/Race_for_Life
91b44aa889241af680e799f7024c2e070782f066
wikidoc
Rachel Morris
Rachel Morris Rachel Morris DHP, MCAP is a British psychotherapist and counsellor who practises in Manchester, UK. Morris has appeared as an expert on several television programmes including Little Angels and Say No to the Knife for BBC Three, Would Like to Meet, The Oprah Winfrey Show, Big Brother's Little Brother and the Big Brother Psych Show, as well as on BBC Radio 1's Sunday Surgery. She is also a consultant for Cosmopolitan and for the BBC website's Relationships section.
Rachel Morris Rachel Morris DHP, MCAP is a British psychotherapist and counsellor who practises in Manchester, UK.[1] Morris has appeared as an expert on several television programmes including Little Angels and Say No to the Knife for BBC Three,[2][3] Would Like to Meet, The Oprah Winfrey Show, Big Brother's Little Brother and the Big Brother Psych Show,[4] as well as on BBC Radio 1's Sunday Surgery.[1] She is also a consultant for Cosmopolitan[5][6] and for the BBC website's Relationships section.[4]
https://www.wikidoc.org/index.php/Rachel_Morris
f7596acad8c0ab2411b64ccdfac658e4d24dbea2
wikidoc
Radial artery
Radial artery # Overview In human anatomy, the radial artery is the main blood vessel, with oxygenated blood, of the lateral aspect of the forearm. # Course The radial artery arises from the bifurcation of the brachial artery in the cubital fossa. It runs distally down the anterior part of the forearm. There, it serves as a landmark for the division between the anterior and posterior compartments of the forearm, with the posterior compartment beginning just lateral to the artery. The artery winds laterally around the wrist, passing through the anatomical snuff box and between the heads of the first dorsal interosseous muscle. It passes anteriorly between the heads of the adductor pollicis, and becomes the deep palmar arch, which joins with the deep branch of the ulnar artery. Along its course, it is accompanied by a similarly named vein, the radial vein. # Branches The named branches of the radial artery may be divided into three groups, corresponding with the three regions in which the vessel is situated. ## In the Forearm - Radial recurrent artery - arises just after the radial artery comes off the brachial artery. It travels superiorly to anastomose with the radial collateral artery. - Palmar carpal branch of radial artery - a small vessel which arises near the lower border of the pronator quadratus - Superficial palmar branch of the radial artery - arises from the radial artery, just where this vessel is about to wind around the lateral side of the wrist. ## At the Wrist - Dorsal carpal branch of radial artery - a small vessel which arises beneath the extensor tendons of the thumb - First dorsal metacarpal artery - arises just before the radial artery passes between the two heads of the first dorsal interosseous muscle and divides almost immediately into two branches which supply the adjacent sides of the thumb and index finger; the lateral side of the thumb receives a branch directly from the radial artery. ## In the Hand - Princeps pollicis artery - arises from the radial artery just as it turns medially to the deep part of the hand. - Radialis indicis - arises close to the princeps pollicis. The two arteries may arise from a common trunk, the first palmar metacarpal artery. - Deep palmar arch - terminal part of radial artery. # Clinical significance The artery's pulse is palpable in the anatomical snuff box and on the anterior aspect of the arm over the carpal bones (where it is commonly used to assess the heart rate and cardiac rhythm). The radial artery is used for coronary artery bypass grafting and is growing in popularity among cardiac surgeons. Recently, it has been shown to have a superior peri-operative and post-operative course when compared to saphenous vein grafts.
Radial artery Template:Infobox Artery Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1] # Overview In human anatomy, the radial artery is the main blood vessel, with oxygenated blood, of the lateral aspect of the forearm. # Course The radial artery arises from the bifurcation of the brachial artery in the cubital fossa. It runs distally down the anterior part of the forearm. There, it serves as a landmark for the division between the anterior and posterior compartments of the forearm, with the posterior compartment beginning just lateral to the artery. The artery winds laterally around the wrist, passing through the anatomical snuff box and between the heads of the first dorsal interosseous muscle. It passes anteriorly between the heads of the adductor pollicis, and becomes the deep palmar arch, which joins with the deep branch of the ulnar artery. Along its course, it is accompanied by a similarly named vein, the radial vein. # Branches The named branches of the radial artery may be divided into three groups, corresponding with the three regions in which the vessel is situated. ## In the Forearm - Radial recurrent artery - arises just after the radial artery comes off the brachial artery. It travels superiorly to anastomose with the radial collateral artery. - Palmar carpal branch of radial artery - a small vessel which arises near the lower border of the pronator quadratus - Superficial palmar branch of the radial artery - arises from the radial artery, just where this vessel is about to wind around the lateral side of the wrist. ## At the Wrist - Dorsal carpal branch of radial artery - a small vessel which arises beneath the extensor tendons of the thumb - First dorsal metacarpal artery - arises just before the radial artery passes between the two heads of the first dorsal interosseous muscle and divides almost immediately into two branches which supply the adjacent sides of the thumb and index finger; the lateral side of the thumb receives a branch directly from the radial artery. ## In the Hand - Princeps pollicis artery - arises from the radial artery just as it turns medially to the deep part of the hand. - Radialis indicis - arises close to the princeps pollicis. The two arteries may arise from a common trunk, the first palmar metacarpal artery. - Deep palmar arch - terminal part of radial artery. # Clinical significance The artery's pulse is palpable in the anatomical snuff box and on the anterior aspect of the arm over the carpal bones (where it is commonly used to assess the heart rate and cardiac rhythm). The radial artery is used for coronary artery bypass grafting and is growing in popularity among cardiac surgeons.[2] Recently, it has been shown to have a superior peri-operative and post-operative course when compared to saphenous vein grafts.[3]
https://www.wikidoc.org/index.php/Radial_artery
0dee51bd618f379ce6ff880c95cccae8bf861b7e
wikidoc
Ran (protein)
Ran (protein) Ran (RAs-related Nuclear protein) also known as GTP-binding nuclear protein Ran is a protein that in humans is encoded by the RAN gene. Ran is a small 25 kDa protein that is involved in transport into and out of the cell nucleus during interphase and also involved in mitosis. It is a member of the Ras superfamily. Ran is a small G protein that is essential for the translocation of RNA and proteins through the nuclear pore complex. The Ran protein has also been implicated in the control of DNA synthesis and cell cycle progression, as mutations in Ran have been found to disrupt DNA synthesis. # Function ## Ran cycle Ran exists in the cell in two nucleotide-bound forms: GDP-bound and GTP-bound. RanGDP is converted into RanGTP through the action of RCC1, the nucleotide exchange factor for Ran. RCC1 is also known as RanGEF (Ran Guanine nucleotide Exchange Factor). Ran's intrinsic GTPase-activity is activated through interaction with Ran GTPase activating protein (RanGAP), facilitated by complex formation with Ran-binding protein (RanBP). GTPase-activation leads to the conversion of RanGTP to RanGDP, thus closing the Ran cycle. Ran can diffuse freely within the cell, but because RCC1 and RanGAP are located in different places in the cell, the concentration of RanGTP and RanGDP differs locally as well, creating concentration gradients that act as signals for other cellular processes. RCC1 is bound to chromatin and therefore located inside the nucleus. RanGAP is cytoplasmic in yeast and bound to the nuclear envelope in plants and animals. In mammalian cells, it is SUMO modified and attached to the cytoplasmic side of the nuclear pore complex via interaction with the nucleoporin RanBP2 (Nup358). This difference in location of the accessory proteins in the Ran cycle leads to a high RanGTP to RanGDP ratio inside the nucleus and an inversely low RanGTP to RanGDP ratio outside the nucleus. In addition to a gradient of the nucleotide bound state of Ran, there is a gradient of the protein itself, with a higher concentration of Ran in the nucleus than in the cytoplasm. Cytoplasmic RanGDP is imported into the nucleus by the small protein NTF2 (Nuclear Transport Factor 2), where RCC1 can then catalyze exchange of GDP for GTP on Ran. ## Role in nuclear transport during interphase Ran is involved in the transport of proteins across the nuclear envelope by interacting with karyopherins and changing their ability to bind or release cargo molecules. Cargo proteins containing a nuclear localization signal (NLS) are bound by importins and transported into the nucleus. Inside the nucleus, RanGTP binds to importin and releases the import cargo. Cargo that needs to get out of the nucleus into the cytoplasm binds to exportin in a ternary complex with RanGTP. Upon hydrolysis of RanGTP to RanGDP outside the nucleus, the complex dissociates and export cargo is released. ## Role in mitosis During mitosis, the Ran cycle is involved in mitotic spindle assembly and nuclear envelope reassembly after the chromosomes have been separated. During prophase, the steep gradient in RanGTP-RanGDP ratio at the nuclear pores breaks down as the nuclear envelope becomes leaky and disassembles. RanGTP concentration stays high around the chromosomes as RCC1, a nucleotide exchange factor, stays attached to chromatin. RanBP2 (Nup358) and RanGAP move to the kinetochores where they facilitate the attachment of spindle fibers to chromosomes. 
Moreover, RanGTP promotes spindle assembly by mechanisms similar to those of nuclear transport: the activity of spindle assembly factors such as NuMA and TPX2 is inhibited by binding to importins. By releasing importins, RanGTP activates these factors and therefore promotes the assembly of the mitotic spindle. In telophase, RanGTP hydrolysis and nucleotide exchange are required for vesicle fusion at the reforming nuclear envelopes of the daughter nuclei. ## Ran and the androgen receptor RAN is an androgen receptor (AR) coactivator (ARA24) that binds differentially with different lengths of polyglutamine within the androgen receptor. Polyglutamine repeat expansion in the AR is linked to spinal and bulbar muscular atrophy (Kennedy's disease). RAN coactivation of the AR diminishes with polyglutamine expansion within the AR, and this weak coactivation may lead to partial androgen insensitivity during the development of spinal and bulbar muscular atrophy. # Interactions Ran has been shown to interact with: - KPNB1, - NEK9, - NUTF2, - RANBP1, - RANGAP1, - RCC1, - TNPO1, - TNPO2, - XPO1, and - XPO5. # Regulation The expression of Ran is repressed by the microRNA miR-10a.
Ran (protein) Ran (RAs-related Nuclear protein) also known as GTP-binding nuclear protein Ran is a protein that in humans is encoded by the RAN gene. Ran is a small 25 kDa protein that is involved in transport into and out of the cell nucleus during interphase and also involved in mitosis. It is a member of the Ras superfamily.[1][2][3] Ran is a small G protein that is essential for the translocation of RNA and proteins through the nuclear pore complex. The Ran protein has also been implicated in the control of DNA synthesis and cell cycle progression, as mutations in Ran have been found to disrupt DNA synthesis.[4] # Function ## Ran cycle Ran exists in the cell in two nucleotide-bound forms: GDP-bound and GTP-bound. RanGDP is converted into RanGTP through the action of RCC1, the nucleotide exchange factor for Ran. RCC1 is also known as RanGEF (Ran Guanine nucleotide Exchange Factor). Ran's intrinsic GTPase-activity is activated through interaction with Ran GTPase activating protein (RanGAP), facilitated by complex formation with Ran-binding protein (RanBP). GTPase-activation leads to the conversion of RanGTP to RanGDP, thus closing the Ran cycle. Ran can diffuse freely within the cell, but because RCC1 and RanGAP are located in different places in the cell, the concentration of RanGTP and RanGDP differs locally as well, creating concentration gradients that act as signals for other cellular processes. RCC1 is bound to chromatin and therefore located inside the nucleus. RanGAP is cytoplasmic in yeast and bound to the nuclear envelope in plants and animals. In mammalian cells, it is SUMO modified and attached to the cytoplasmic side of the nuclear pore complex via interaction with the nucleoporin RanBP2 (Nup358). This difference in location of the accessory proteins in the Ran cycle leads to a high RanGTP to RanGDP ratio inside the nucleus and an inversely low RanGTP to RanGDP ratio outside the nucleus. In addition to a gradient of the nucleotide bound state of Ran, there is a gradient of the protein itself, with a higher concentration of Ran in the nucleus than in the cytoplasm. Cytoplasmic RanGDP is imported into the nucleus by the small protein NTF2 (Nuclear Transport Factor 2), where RCC1 can then catalyze exchange of GDP for GTP on Ran. ## Role in nuclear transport during interphase Ran is involved in the transport of proteins across the nuclear envelope by interacting with karyopherins and changing their ability to bind or release cargo molecules. Cargo proteins containing a nuclear localization signal (NLS) are bound by importins and transported into the nucleus. Inside the nucleus, RanGTP binds to importin and releases the import cargo. Cargo that needs to get out of the nucleus into the cytoplasm binds to exportin in a ternary complex with RanGTP. Upon hydrolysis of RanGTP to RanGDP outside the nucleus, the complex dissociates and export cargo is released. ## Role in mitosis During mitosis, the Ran cycle is involved in mitotic spindle assembly and nuclear envelope reassembly after the chromosomes have been separated.[5][6] During prophase, the steep gradient in RanGTP-RanGDP ratio at the nuclear pores breaks down as the nuclear envelope becomes leaky and disassembles. RanGTP concentration stays high around the chromosomes as RCC1, a nucleotide exchange factor, stays attached to chromatin.[7] RanBP2 (Nup358) and RanGAP move to the kinetochores where they facilitate the attachment of spindle fibers to chromosomes. 
Moreover, RanGTP promotes spindle assembly by mechanisms similar to mechanisms of nuclear transport: the activity of spindle assembly factors such as NuMA and TPX2 is inhibited by the binding to importins. By releasing importins, RanGTP activates these factors and therefore promotes the assembly of the mitotic spindle . In telophase, RanGTP hydrolysis and nucleotide exchange are required for vesicle fusion at the reforming nuclear envelopes of the daughter nuclei. ## Ran and the androgen receptor RAN is an androgen receptor (AR) coactivator (ARA24) that binds differentially with different lengths of polyglutamine within the androgen receptor. Polyglutamine repeat expansion in the AR is linked to spinal and bulbar muscular atrophy (Kennedy's disease). RAN coactivation of the AR diminishes with polyglutamine expansion within the AR, and this weak coactivation may lead to partial androgen insensitivity during the development of spinal and bulbar muscular atrophy.[8][9] # Interactions Ran has been shown to interact with: - KPNB1,[10][11][12] - NEK9,[13] - NUTF2,[14][15] - RANBP1,[10][16][17] - RANGAP1,[18][19][20] - RCC1,[16][17][21][22] - TNPO1,[23][24] - TNPO2,[24] - XPO1,[10][25][26] and - XPO5.[27] # Regulation The expression of Ran is repressed by the microRNA miR-10a.[28]
https://www.wikidoc.org/index.php/Ran_(protein)
4142be3d17b793fada96bfca0f8faff56d9817f9
wikidoc
Randomization
Randomization Randomization is the process of making something random; this can mean: - Generating a random permutation of a sequence (such as when shuffling cards). - Selecting a random sample of a population (important in statistical sampling). - Generating random numbers: see Random number generation. # Applications Randomization is used extensively in the field of gambling. Imperfect randomization may allow a skilled gambler to have an advantage, so much research has been devoted to effective randomization. A classic example of randomization is shuffling playing cards. Randomization is a core principle in the statistical theory of design of experiments. Its use was extensively promoted by R.A. Fisher in his book Statistical Methods for Research Workers. Randomization involves randomly allocating the experimental units across the treatment groups. Thus, if the experiment compares a new drug against a standard drug used as a control, the patients should be allocated to new drug or control by a random process. Randomization is not haphazard; it serves a purpose in both frequentist and Bayesian statistics. A frequentist would say that randomization reduces bias by equalising other factors that have not been explicitly accounted for in the experimental design. Considerations of bias are of little concern to Bayesians, who recommend randomization because it produces ignorable designs. In design of experiments, frequentists prefer Completely Randomized Designs. Other experimental designs are used when a full randomization is not possible. These cases include experiments that involve blocking and experiments that have hard-to-change factors. # Techniques Although historically "manual" randomization techniques (such as shuffling cards, drawing pieces of paper from a bag, spinning a roulette wheel) were common, nowadays automated techniques are mostly used. As both selecting random samples and random permutations can be reduced to simply selecting random numbers, random number generation methods are now most commonly used, both hardware random number generators and pseudo-random number generators. Non-algorithmic randomization methods include: - Casting yarrow stalks (for the I Ching) - Throwing dice - Drawing straws - Shuffling cards - Roulette wheels - Drawing pieces of paper or balls from a bag - "Lottery machines" - Observing atomic decay using a radiation counter # Links - RQube - Generate quasi-random stimulus sequences for experimental designs - RandList - Randomization List Generator
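In practice, the randomization tasks listed above (random permutations, random samples, and random allocation of experimental units to treatment groups) are usually carried out with a pseudo-random number generator. The Python sketch below is illustrative only; the unit and patient identifiers, group sizes, and seed are invented for the example.

```python
import random

rng = random.Random(42)  # seeded pseudo-random number generator, for reproducibility

# Generating a random permutation of a sequence (e.g. shuffling cards).
deck = list(range(52))
rng.shuffle(deck)

# Selecting a random sample of a population (statistical sampling).
population = [f"unit-{i}" for i in range(1000)]
sample = rng.sample(population, k=30)

# Randomly allocating experimental units across two treatment groups,
# e.g. a new drug versus a control, with equal group sizes.
patients = [f"patient-{i}" for i in range(20)]
rng.shuffle(patients)
new_drug_group = patients[:10]
control_group = patients[10:]

print(sample[:3], new_drug_group[:3], control_group[:3])
```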
Randomization Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1] Randomization is the process of making something random; this can mean: - Generating a random permutation of a sequence (such as when shuffling cards). - Selecting a random sample of a population (important in statistical sampling). - Generating random numbers: see Random number generation. # Applications Randomization is used extensively in the field of gambling. Imperfect randomization may allow a skilled gambler to have an advantage, so much research has been devoted to effective randomization. A classic example of randomization is shuffling playing cards. Randomization is a core principle in the statistical theory of design of experiments. Its use was extensively promoted by R.A. Fisher in his book Statistical Methods for Research Workers. Randomization involves randomly allocating the experimental units across the treatment groups. Thus, if the experiment compares a new drug against a standard drug used as a control, the patients should be allocated to new drug or control by a random process. Randomization is not haphazard; it serves a purpose in both frequentist and Bayesian statistics. A frequentist would say that randomization reduces bias by equalising other factors that have not been explicitly accounted for in the experimental design. Considerations of bias are of little concern to Bayesians, who recommend randomization because it produces ignorable designs. In design of experiments, frequentists prefer Completely Randomized Designs. Other experimental designs are used when a full randomization is not possible. These cases include experiments that involve blocking and experiments that have hard-to-change factors. # Techniques Although historically "manual" randomization techniques (such as shuffling cards, drawing pieces of paper from a bag, spinning a roulette wheel) were common, nowadays automated techniques are mostly used. As both selecting random samples and random permutations can be reduced to simply selecting random numbers, random number generation methods are now most commonly used, both hardware random number generators and pseudo-random number generators. Non-algorithmic randomization methods include: - Casting yarrow stalks (for the I Ching) - Throwing dice - Drawing straws - Shuffling cards - Roulette wheels - Drawing pieces of paper or balls from a bag - "Lottery machines" - Observing atomic decay using a radiation counter # Links - RQube - Generate quasi-random stimulus sequences for experimental designs - RandList - Randomization List Generator
https://www.wikidoc.org/index.php/Randomization
b01ddc3deffbd88a2fc6878b1cc65c08ffc8c666
wikidoc
Rankine scale
Rankine scale Rankine is a thermodynamic (absolute) temperature scale named after the Scottish engineer and physicist William John Macquorn Rankine, who proposed it in 1859. The symbol is R (or Ra if necessary to distinguish it from the Rømer and Réaumur scales). As with the Kelvin scale (symbol: K), zero on the Rankine scale is absolute zero, but the Rankine degree is defined as equal to one degree Fahrenheit, rather than the one degree Celsius used by the Kelvin scale. A temperature of -459.67 °F is exactly equal to 0 R. A few engineering fields in the U.S. measure thermodynamic temperature using the Rankine scale. However, throughout the scientific world where measurements are made in SI units, thermodynamic temperature is measured in kelvin. Some key temperatures relating the Rankine scale to other temperature scales are shown in the table below.
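Because the Rankine degree is the same size as the Fahrenheit degree and the scale shares its zero with the Kelvin scale, conversion is simple: °R = °F + 459.67 and K = °R × 5/9. The short Python sketch below is an illustrative addition (not part of the original article) that converts a few key temperatures.
```python
def fahrenheit_to_rankine(f):
    """Rankine and Fahrenheit degrees are the same size; 0 R corresponds to -459.67 degrees F."""
    return f + 459.67

def rankine_to_kelvin(r):
    """Both scales start at absolute zero; a Rankine degree is 5/9 of a kelvin."""
    return r * 5.0 / 9.0

if __name__ == "__main__":
    for label, f in [("absolute zero", -459.67),
                     ("ice point of water", 32.0),
                     ("boiling point of water", 212.0)]:
        r = fahrenheit_to_rankine(f)
        print(f"{label}: {f} F = {r:.2f} R = {rankine_to_kelvin(r):.2f} K")
```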
Rankine scale Template:Temperature Rankine is a thermodynamic (absolute) temperature scale named after the Scottish engineer and physicist William John Macquorn Rankine, who proposed it in 1859. The symbol is R (or Ra if necessary to distinguish it from the Rømer and Réaumur scales). As with the Kelvin scale (symbol: K), zero on the Rankine scale is absolute zero, but the Rankine degree is defined as equal to one degree Fahrenheit, rather than the one degree Celsius used by the Kelvin scale. A temperature of -459.67 °F is exactly equal to 0 R. A few engineering fields in the U.S. measure thermodynamic temperature using the Rankine scale. However, throughout the scientific world where measurements are made in SI units, thermodynamic temperature is measured in kelvin. Some key temperatures relating the Rankine scale to other temperature scales are shown in the table below. Template:Measurement-stub
https://www.wikidoc.org/index.php/Rankine_scale
2be4432c917d22610c6930560ac5d6e73bc55886
wikidoc
Ranunculaceae
Ranunculaceae Ranunculaceae is a family of flowering plants also known as the "buttercup family" or "crowfoot family". The family name is derived from the genus Ranunculus. Members include Anemone (anemones), Ranunculus (buttercups), Aconitum (aconite), and Clematis. Ranuncula is Late Latin for "little frog," the diminutive of rana. According to the database of the Royal Botanic Gardens, Kew, the family consists of 51 to 88 genera, totalling about 2500 species. Numerically the most important genera are Ranunculus (600 species), Delphinium (365 species), Thalictrum (330 species), Clematis (325 species), and Aconitum (300 species). Ranunculaceae can be found worldwide, but are most common in the temperate and cold areas of the northern hemisphere. The family contains many ornamental flowering plants common to the Himalaya, some of which are of medicinal value. # Taxonomy This family has been universally recognized by taxonomists, and the APG II system, of 2003 (unchanged from the APG system, of 1998), places it in the order Ranunculales, in the clade eudicots. The cladogram below has been proposed in APG II system according to recent molecular phylogeny. The genus Glaucidium was once put in its own family (Glaucidiaceae), but has been recently recognised as a primitive member of Ranunculaceae. Tamura (1993) recognised five subfamilies, mainly based on chromosomic and floral characteristics (Hydrastidoideae, Thalictroideae, Isopyroideae, Ranunculoideae, Helleboroideae). Hydrastidoideae and Glaucidioideae have only one species, Hydrastis canadense and Glaucidium palmatum respectively. Coptoideae has 17 species and Thalictroideae has 450, including Thalictrum and Aquilegia. The other genera (2025 species, 81% of the family) belong to Ranunculoideae. Some older classifications included Paeonia (peony) in Ranunculaceae but this genus is now placed in its own family, Paeoniaceae in order Saxifragales. Circaeaster and Kingdonia are now placed in Circaeasteraceae. # Description Ranunculaceae are mostly herbaceous plants, but with some woody climbers (such as Clematis) and subshrubs (e.g. Xanthorhiza). Leaves are very often more or less palmately compound. The flowers of the Ranunculaceae show what are considered in some systems of plant taxonomy to be typically primitive characteristics, although the classification scheme of the Angiosperm Phylogeny Group considers this family to be among the most basal of the derived Eudicots clade. They are generally showy and medium to large in size in order to attract pollinators and are actinomorphic or radially symmetrical, although in some genera (e. g. Aconitum, Consolida) they are zygomorphic or bilaterally symmetrical. The perianth is made of one or, more commonly, two whorls, often not clearly differentiated into a true calyx and corolla, the sepals may be joined and the petals are often evolved into spurred nectaries or otherwise modified. The flowers have many free stamens arranged in spirals and usually many free pistils. Flowers are most often grouped in terminal racemes, panicles or cymes. The fruit is most commonly a follicle (e. g. Helleborus, Nigella) or an achene (e. g. Ranunculus, Clematis). Ranunculaceae contain protoanemonin, which is toxic to humans and animals. Other poisonous or toxic compounds, alkaloids and glycosides, are also very common. # Uses Some Ranunculaceae are used as herbal medicines because of their alkaloids and glycosides, such as Hydrastis canadensis (goldenseal), whose root is used as a tonic. 
Many genera are well known as cultivated flowers, such as Aconitum (monkshood), Consolida (larkspur), Delphinium, Helleborus (Christmas rose), Trollius (globeflower). The seeds of Nigella sativa are used as a spice in Indian and Middle Eastern cuisine. # Selected genera # Image gallery - Flower diagram of Aconitum napellus - Aconitum napellus - Adonis aestivalis - Helleborus orientalis (green nectaries) - Hydrastis canadensis - Follicles of Helleborus niger - Achenes of Ranunculus acris - Ranunculus repens - Anemone narcissiflora - Consolida regalis - Ranunculus trichophyllus # References and external links - Ranunculaceae in Topwalks - Ranunculaceae in L. Watson and M.J. Dallwitz (1992 onwards). The families of flowering plants. - Flora of North America: Ranunculaceae - Flora of China: Ranunculaceae - NCBI Taxonomy Browser - links at CSDL, Texas - Japanese Ranunculaceae - Flavon's art gallery - Family Ranunculaceae Flowers in Israel - Stevens, P. F. (2001 onwards). Angiosperm Phylogeny Website. Version 7, May 2006. - Sandro Pignatti, Flora d'Italia, Edagricole, Bologna 1982. ISBN 8850624492 (in Italian) - Tamura, M.: "Ranunculaceae", in Kubitzki, K., Rohwer, J.G. & Bittrich, V. (eds.), The Families and Genera of Vascular Plants. II. Flowering Plants - Dicotyledons. Springer-Verlag: Berlin, 1993. ISBN 3-540-55509-9 - Strasburger, Noll, Schenck, Schimper: Lehrbuch der Botanik für Hochschulen. 4. Auflage, Gustav Fischer, Jena 1900, p. 459 (flower diagrams)
Ranunculaceae Ranunculaceae is a family of flowering plants also known as the "buttercup family" or "crowfoot family". The family name is derived from the genus Ranunculus. Members include Anemone (anemones), Ranunculus (buttercups), Aconitum (aconite), and Clematis. Ranuncula is Late Latin for "little frog," the diminutive of rana. According to the database of the Royal Botanic Gardens, Kew, the family consists of 51 to 88 genera, totalling about 2500 species. Numerically the most important genera are Ranunculus (600 species), Delphinium (365 species), Thalictrum (330 species), Clematis (325 species), and Aconitum (300 species). Ranunculaceae can be found worldwide, but are most common in the temperate and cold areas of the northern hemisphere. The family contains many ornamental flowering plants common to the Himalaya, some of which are of medicinal value. # Taxonomy This family has been universally recognized by taxonomists, and the APG II system, of 2003 (unchanged from the APG system, of 1998), places it in the order Ranunculales, in the clade eudicots. The cladogram below has been proposed in APG II system according to recent molecular phylogeny. Template:Clade The genus Glaucidium was once put in its own family (Glaucidiaceae), but has been recently recognised as a primitive member of Ranunculaceae. Tamura (1993) recognised five subfamilies, mainly based on chromosomic and floral characteristics (Hydrastidoideae, Thalictroideae, Isopyroideae, Ranunculoideae, Helleboroideae). Hydrastidoideae and Glaucidioideae have only one species, Hydrastis canadense and Glaucidium palmatum respectively. Coptoideae has 17 species and Thalictroideae has 450, including Thalictrum and Aquilegia. The other genera (2025 species, 81% of the family) belong to Ranunculoideae. Some older classifications included Paeonia (peony) in Ranunculaceae but this genus is now placed in its own family, Paeoniaceae in order Saxifragales. Circaeaster and Kingdonia are now placed in Circaeasteraceae. # Description Ranunculaceae are mostly herbaceous plants, but with some woody climbers (such as Clematis) and subshrubs (e.g. Xanthorhiza). Leaves are very often more or less palmately compound. The flowers of the Ranunculaceae show what are considered in some systems of plant taxonomy to be typically primitive characteristics, although the classification scheme of the Angiosperm Phylogeny Group considers this family to be among the most basal of the derived Eudicots clade. They are generally showy and medium to large in size in order to attract pollinators and are actinomorphic or radially symmetrical, although in some genera (e. g. Aconitum, Consolida) they are zygomorphic or bilaterally symmetrical. The perianth is made of one or, more commonly, two whorls, often not clearly differentiated into a true calyx and corolla, the sepals may be joined and the petals are often evolved into spurred nectaries or otherwise modified. The flowers have many free stamens arranged in spirals and usually many free pistils. Flowers are most often grouped in terminal racemes, panicles or cymes. The fruit is most commonly a follicle (e. g. Helleborus, Nigella) or an achene (e. g. Ranunculus, Clematis). Ranunculaceae contain protoanemonin, which is toxic to humans and animals. Other poisonous or toxic compounds, alkaloids and glycosides, are also very common. # Uses Some Ranunculaceae are used as herbal medicines because of their alkaloids and glycosides, such as Hydrastis canadensis (goldenseal), whose root is used as a tonic. 
Many genera are well known as cultivated flowers, such as Aconitum (monkshood), Consolida (larkspur), Delphinium, Helleborus (Christmas rose), Trollius (globeflower). The seeds of Nigella sativa, are used as a spice in Indian and Middle Eastern cuisine. # Selected genera # Image gallery - Flower diagram of Aconitum napellus Flower diagram of Aconitum napellus - Aconitum napellus Aconitum napellus - Adonis aestivalis Adonis aestivalis - Helleborus orientalis (green nectaries) Helleborus orientalis (green nectaries) - Hydrastis canadensis Hydrastis canadensis - Follicles of Helleborus niger Follicles of Helleborus niger - Achenes of Ranunculus acris Achenes of Ranunculus acris - Ranunculus repens Ranunculus repens - Anemone narcissiflora Anemone narcissiflora - Consolida regalis Consolida regalis - Ranunculus trichophyllus Ranunculus trichophyllus # References and external links Template:Wikispecies - Ranunculaceae in Topwalks - Ranunculaceae - Ranunculaceae in L. Watson and M.J. Dallwitz (1992 onwards). The families of flowering plants. - Flora of North America: Ranunculaceae - Flora of China: Ranunculaceae - NCBI Taxonomy Browser - links at CSDL, Texas - Japanese Ranunculaceae - Flavon's art gallery - Family Ranunculaceae Flowers in Israel - Stevens, P. F. (2001 onwards). Angiosperm Phylogeny Website. Version 7, May 2006 [and more or less continuously updated since]. [1] - Template:It Sandro Pignatti, Flora d'Italia, Edagricole, Bologna 1982. ISBN 8850624492 - Tamura, M.: "Ranunculaceae.", en Kubitzki, K., Rohwer, J.G. & Bittrich, V. (Editores). The Families and Genera of Vascular Plants. II. Flowering Plants - Dicotyledons..- Springer-Verlag: Berlín, 1993.- ISBN 3-540-55509-9 - Strasburger, Noll, Schenck, Schimper: Lehrbuch der Botanik für Hochschulen. 4. Auflage, Gustav Fischer, Jena 1900, p. 459 (flower diagrams) bg:Лютикови ca:Ranunculàcia cs:Pryskyřníkovité da:Ranunkel-familien de:Hahnenfußgewächse et:Tulikalised eo:Ranunkolacoj fa:آلالگان ko:미나리아재비과 hsb:Maslenkowe rostliny id:Ranunculaceae it:Ranunculaceae he:נוריתיים ka:ბაიასებრნი lv:Gundegu dzimta lt:Vėdryniniai hu:Boglárkafélék mk:Лутичиња nl:Ranonkelfamilie no:Soleiefamilien nn:Soleiefamilien se:Fiskesrássišattut sl:Zlatičevke sr:Љутићи fi:Leinikkikasvit sv:Ranunkelväxter uk:Жовтецеві Template:WH Template:WS
https://www.wikidoc.org/index.php/Ranunculaceae
2a433b8c1a43660c771c5444019c2a0cd959d84f
wikidoc
Raphael House
Raphael House Raphael House is an innovative shelter in the Tenderloin, San Francisco, California which provides transitional housing and support programs for parents and children who are suffering from homelessness. Established in 1971 at Gough and McAllister Streets, Raphael House was the first shelter for homeless families in the city. It has been located on Sutter Street since 1977. It is a non-profit organization which accepts no government funding, relying upon San Francisco Bay Area philanthropy which has become increasingly innovative. (Not all offers of support, however, are accepted.) From 1979 through 1999, Raphael House also operated Brother Juniper's Restaurant, an on-site breakfast café named for Saint Juniper. Though it had brought Raphael House a welcome albeit small net profit for twenty years, the expense of renovating its kitchens and the need for additional space for the children's afterschool tutorial center combined to require its closure. Raphael House was established in 1971, at a time when there were no other shelters for homeless families in the city. And its focus — beginning with the brightly colored children's paintings that line the halls — is still emphatically on kids.     The building has the feel of an old-fashioned home, with braided rugs on the floors and flowered tablecloths in the dining rooms. The residents have agreed to spend several months in a benign, nurturing boot camp, with curfews and rules of behavior as well as counseling and practical training. Each of the 20-odd families living here has its own tiny room, but much of everyday life takes place communally, interlaced with the kinds of rituals that provide both stability and variety. Parents and children eat breakfast and dinner together, seated as families. In the evening, the little ones gather in the "Children's Garden" to listen to stories and then are led, pajama-clad, in a singing procession to their rooms. In mid-winter, Santa Claus pays a visit, bringing toys and good things to eat; so do St. Nicholas and Santa Lucia.     Raphael House exists entirely on contributions, along with the earnings of its thrift shop and Brother Juniper's restaurant. We started in the basement of what was obviously a former hospital, winding our way past piles of donated furniture and bins of bedding, which will accompany residents when they move into places of their own. The shelter has strong ties to the Eastern Orthodox Church, and I kept being introduced to bearded men in long black robes, including one tall beanpole of a priest, Executive Director Father David Lowell. We looked in at the day care center, which has just graduated four parents from its first class of licensed day care providers. The room was nearly empty — most of the children were on a field trip — but one tiny girl who had just awakened from her nap soberly waved a bottle of juice in our direction.     But what about the roof garden? It's a playground. The wooden deck is long enough for a six year old to get a good run across the middle, and at the sides large pots of plants wind in and out among playhouses and low climbing structures. In an area with little open space, the children of Raphael House can play in the sky.
Raphael House Raphael House is an innovative shelter in the Tenderloin, San Francisco, California[1][2] which provides transitional housing and support programs for parents and children who are suffering from homelessness. Established in 1971 at Gough and McAllister Streets,[3] Raphael House was the first shelter for homeless families in the city. It has been located on Sutter Street since 1977. It is a non-profit organization which accepts no government funding,[3][4] relying upon San Francisco Bay Area philanthropy which has become increasingly innovative.[5][6][7] (Not all offers of support,[8] however, are accepted.) From 1979 through 1999, Raphael House also operated Brother Juniper's Restaurant,[9] an on-site breakfast café named for Saint Juniper. Though it had brought Raphael House a welcome albeit small net profit for twenty years, the expense of renovating its kitchens and the need for additional space for the children's afterschool tutorial center combined to require its closure. Raphael House was established in 1971, at a time when there were no other shelters for homeless families in the city. And its focus — beginning with the brightly colored children's paintings that line the halls — is still emphatically on kids.     The building has the feel of an old-fashioned home, with braided rugs on the floors and flowered tablecloths in the dining rooms. The residents have agreed to spend several months in a benign, nurturing boot camp, with curfews and rules of behavior as well as counseling and practical training. Each of the 20-odd families living here has its own tiny room, but much of everyday life takes place communally, interlaced with the kinds of rituals that provide both stability and variety. Parents and children eat breakfast and dinner together, seated as families. In the evening, the little ones gather in the "Children's Garden" to listen to stories and then are led, pajama-clad, in a singing procession to their rooms. In mid-winter, Santa Claus pays a visit, bringing toys and good things to eat; so do St. Nicholas and Santa Lucia.     Raphael House exists entirely on contributions, along with the earnings of its thrift shop and [until 1999] Brother Juniper's restaurant. We started in the basement of what was obviously a former hospital, winding our way past piles of donated furniture and bins of bedding, which will accompany residents when they move into places of their own. The shelter has strong ties to the Eastern Orthodox Church, and I kept being introduced to bearded men in long black robes, including one tall beanpole of a priest, Executive Director Father David Lowell. We looked in at the day care center, which has just graduated four parents from its first class of licensed day care providers. The room was nearly empty — most of the children were on a field trip — but one tiny girl who had just awakened from her nap soberly waved a bottle of juice in our direction.     But what about the roof garden? It's a playground. The wooden deck is long enough for a six year old to get a good run across the middle, and at the sides large pots of plants wind in and out among playhouses and low climbing structures. In an area with little open space, the children of Raphael House can play in the sky.
https://www.wikidoc.org/index.php/Raphael_House
0b2cf1f15a97d3d43d101cee7885081dfe079f4e
wikidoc
Rapport (NLP)
Rapport (NLP) Rapport is one of the most important features or characteristics of unconscious human interaction. It is commonality of perspective, being in "sync", being on the same wavelength as the person you are talking to. This article discusses rapport from a neuro-linguistic programming (NLP) perspective. # Explanation There are a number of techniques that are supposed to be beneficial in building rapport, such as matching body language (i.e., posture, gesture, and so forth), maintaining eye contact, and matching breathing rhythm. Some of these techniques are explored in neuro-linguistic programming. A classic if unusual example of rapport can be found in the book "Uncommon Therapy" by Jay Haley (ISBN 0-393-31031-0), about the psychotherapeutic intervention techniques of Milton Erickson. Erickson developed the ability to enter the world view of his patients and, from that vantage point (having established rapport), he was able to make extremely effective interventions (to help his patients overcome life problems). In Neuro-linguistic programming, Richard Bandler and John Grinder noticed that the family therapist Virginia Satir "matched her predicates (verbs, adverbs, and adjectives) to those used by her clients". They noticed that Fritz Perls also did similar things with his clients. In addition, Milton Erickson mirrored his clients' body posture and movements. However, due to post-polio syndrome, Erickson had limited movement and was not able to match his clients' posture directly. Instead he would change his voice and head position in time with the client's movements. Bandler and Grinder stated that once mirroring was established, the therapist could then 'lead' the client by changing their own state and offering suggestions. It was, thus, a way to improve responsiveness and communication. # Quotes
Rapport (NLP) Template:Neuro-linguistic programming Rapport is one of the most important features or characteristics of unconscious human interaction. It is commonality of perspective, being in "sync", being on the same wavelength as the person you are talking to. This article discusses rapport from a neuro-linguistic programming (NLP) perspective # Explanation There are a number of techniques that are supposed to be beneficial in building rapport such as: matching your body language (ie, posture, gesture, and so forth); maintaining eye contact; and matching breathing rhythm. Some of these techniques are explored in neuro-linguistic programming. A classic if unusual example of rapport can be found in the book "Uncommon Therapy" by Jay Haley (ISBN 0-393-31031-0), about the psychotherapeutic intervention techniques of Milton Erickson. Erickson developed the ability to enter the world view of his patients and, from that vantage point (having established rapport), he was able to make extremely effective interventions (to help his patients overcome life problems). In Neuro-linguistic programming, Richard Bandler and John Grinder noticed that the family therapist Virginia Satir "matched her predicates (verbs, adverbs, and adjectives) to those used by her clients"[1] They noticed Fritz Perls also did similar things with his clients. In addition Milton Erickson mirrored his clients body posture, and movements. However, due to post polio syndrome, Erickson had limited movement and was not able match his clients posture directly. Instead he would change his voice and head position in time with the client's movements.[1] Bandler and Grinder stated that once mirroring was established, the therapist could then 'lead' the client by changing their own state and offering suggestions. It was, thus, a way to improve responsiveness and communication. # Quotes
https://www.wikidoc.org/index.php/Rapport_(NLP)
39407bf767f475a6a9b27e32eee80f5129a86fa7
wikidoc
Ras (protein)
Ras (protein) # Overview In molecular biology, Ras is the name of a protein, the gene that encodes it, and the family and superfamily (see Ras superfamily) of proteins to which it belongs. The ras oncogene encodes a signal transduction protein, which means that it relays signals from cell-surface receptors to the cell interior. Sometimes a DNA mutation turns the signal permanently on, which leads to unlimited cell growth and cancer. The Ras superfamily of small GTPases includes the Ras, Rho, Arf, Rab, and Ran families. # History The RAS genes were first identified as the transforming oncogenes, responsible for the cancer-causing activities of the Harvey (the HRAS oncogene) and Kirsten (KRAS) sarcoma viruses, by Edward M. Scolnick and colleagues at the National Institutes of Health (NIH). These viruses were discovered originally in rats during the 1960s by Jennifer Harvey and Werner Kirsten, respectively. In 1982, activated and transforming human RAS genes were discovered in human cancer cells by Geoffrey M. Cooper at Harvard, Mariano Barbacid and Stuart A. Aaronson at the NIH and by Robert A. Weinberg of MIT. Subsequent studies identified a third human RAS gene, designated NRAS, for its initial identification in human neuroblastoma cells. # Functions The three human RAS genes encode highly related 188 to 189 amino acid proteins, designated H-Ras, N-Ras and K-Ras4A and K-Ras4B (the two K-Ras proteins arise from alternative gene splicing). Ras proteins function as binary molecular switches that control intracellular signaling networks. Ras-regulated signal pathways control such processes as actin cytoskeletal integrity, proliferation, differentiation, cell adhesion, apoptosis, and cell migration. Ras and ras-related proteins are often deregulated in cancers, leading to increased invasion and metastasis, and decreased apoptosis. Ras activates a number of pathways, but an especially important one seems to be the mitogen-activated protein (MAP) kinases, which themselves transmit signals downstream to other protein kinases and gene regulatory proteins. # Activated and inactivated forms Ras is a G protein (specifically a small GTPase): a regulatory GTP hydrolase that cycles between two conformations – an activated and an inactivated form, respectively Ras-GTP and Ras-GDP. It is activated by guanine nucleotide exchange factors (GEFs, e.g. CDC25, SOS1 and SOS2, SDC25 in yeast), which are themselves activated by mitogenic signals and through feedback from Ras itself. A GEF usually heightens the dissociation rate of the nucleotide – while not changing the association rate (effectively lowering the affinity for the nucleotide) – thereby promoting its exchange. The cellular concentration of GTP is much higher than that of GDP, so the exchange usually replaces the bound GDP with GTP. Ras is inactivated by GTPase-activating proteins (GAPs, the most frequently cited one being RasGAP), which increase the rate of GTP hydrolysis, returning Ras to its GDP-bound form and simultaneously releasing an inorganic phosphate. # Attachments Ras is attached to the cell membrane by prenylation, and in health is a key component in many pathways which couple growth factor receptors to downstream mitogenic effectors involved in cell proliferation or differentiation. The C-terminal CaaX box of Ras is first farnesylated at its Cys residue in the cytosol, and the protein is then inserted into the membrane of the endoplasmic reticulum. The tripeptide (aaX) is then cleaved from the C-terminus by a prenyl-protein-specific endoprotease, and the new C-terminus is methylated by a methyltransferase. The processed Ras is then transported to the plasma membrane. Most Ras forms are further palmitoylated, while K-Ras, with its long positively charged stretch, interacts electrostatically with the membrane. # Ras in cancer Mutations in the Ras family of proto-oncogenes (comprising H-Ras, N-Ras and K-Ras) are very common, being found in 20% to 30% of all human tumours. ## Inappropriate activation of the gene Inappropriate activation of the gene has been shown to play a key role in signal transduction, proliferation and malignant transformation. Mutations in a number of different genes, as well as in RAS itself, can have this effect. Oncogenes such as p210BCR-ABL or the growth receptor erbB are upstream of Ras, so if they are constitutively activated their signals will transduce through Ras. The tumour suppressor gene NF1 encodes a Ras-GAP – its mutation in neurofibromatosis means that Ras is less likely to be inactivated. Ras can also be amplified, although this only occurs occasionally in tumours. Finally, Ras oncogenes can be activated by point mutations so that their GTPase reaction can no longer be stimulated by GAP – this increases the half-life of active Ras-GTP mutants. ## Constitutively active Ras Constitutively active Ras (RasD) is one which contains mutations that prevent GTP hydrolysis, thus locking Ras in a permanently 'on' state. The most common mutations are found at residue G12 in the P-loop and the catalytic residue Q61. - The glycine to valine mutation at residue 12 renders the GTPase domain of Ras insensitive to inactivation by GAP and thus stuck in the "on state". Ras requires a GAP for inactivation as it is a relatively poor catalyst on its own, as opposed to other G-domain-containing proteins such as the alpha subunit of heterotrimeric G proteins. - Residue 61 is responsible for stabilizing the transition state for GTP hydrolysis. Because enzyme catalysis in general is achieved by lowering the energy barrier between substrate and product, mutation of Q61 necessarily reduces the rate of intrinsic Ras GTP hydrolysis to physiologically meaningless levels. See also "dominant negative" mutants such as S17N and D119N. # Human proteins containing Ras domain ARHE; ARHGAP5; CDC42; DIRAS1; DIRAS2; DIRAS3; ERAS; GEM; GRLF1; HRAS; KRAS; LOC393004; MRAS; NKIRAS1; NRAS; RAB10; RAB11A; RAB11B; RAB12; RAB13; RAB14; RAB15; RAB17; RAB18; RAB19; RAB1A; RAB1B; RAB2; RAB20; RAB21; RAB22A; RAB23; RAB24; RAB25; RAB26; RAB27A; RAB27B; RAB28; RAB2B; RAB30; RAB31; RAB32; RAB33A; RAB33B; RAB34; RAB35; RAB36; RAB37; RAB38; RAB39; RAB39B; RAB3A; RAB3B; RAB3C; RAB3D; RAB40A; RAB40AL; RAB40B; RAB40C; RAB41; RAB42; RAB43; RAB4A; RAB4B; RAB5A; RAB5B; RAB5C; RAB6A; RAB6B; RAB6C; RAB7A; RAB7B; RAB7L1; RAB8A; RAB8B; RAB9; RAB9B; RABL2A; RABL2B; RABL4; RAC1; RAC2; RAC3; RALA; RALB; RAN; RANP1; RAP1A; RAP1B; RAP2A; RAP2B; RAP2C; RASD1; RASD2; RASEF; RASL11A; RASL12; RBJ; REM1; REM2; RERG; RHEB; RHEBL1; RHOA; RHOB; RHOBTB1; RHOBTB2; RHOC; RHOD; RHOF; RHOG; RHOH; RHOJ; RHOQ; RHOU; RHOV; RIT1; RIT2; RND1; RND2; RND3; RRAD; RRAS; RRAS2; TC4;
Ras (protein) Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1] # Overview In molecular biology, Ras is the name of a protein, the gene that encodes it, and the family and superfamily (see Ras superfamily) of proteins to which it belongs. The ras oncogene is a signal transduction protein, which means that it communicates signals to other cells. Sometimes a DNA mutation turns the signal permanently on, which leads to unlimited cell growth and cancer.[1] The Ras superfamily of small GTPases includes the Ras, Rho, Arf, Rab, and Ran families. # History The RAS genes were first identified as the transforming oncogenes, responsible for the cancer-causing activities of the Harvey (the HRAS oncogene) and Kirsten (KRAS) sarcoma viruses, by Edward M. Scolnick and colleagues at the National Institutes of Health (NIH). These viruses were discovered originally in rats during the 1960's by Jennifer Harvey and Werner Kirsten, respectively. In 1982, activated and transforming human RAS genes were discovered in human cancer cells by Geoffrey M. Cooper at Harvard, Mariano Barbacid and Stuart A. Aaronson at the NIH and by Robert A. Weinberg of MIT. Subsequent studies identified a third human RAS gene, designated NRAS, for its initial identification in human neuroblastoma cells. # Functions The three human RAS genes encode highly related 188 to 189 amino acid proteins, designated H-Ras, N-Ras and K-Ras4A and K-Ras4B (the two K-Ras proteins arise from alternative gene splicing). Ras proteins function as binary molecular switches that control intracellular signaling networks. Ras-regulated signal pathways control such processes as actin cytoskeletal integrity, proliferation, differentiation, cell adhesion, apoptosis, and cell migration. Ras and ras-related proteins are often deregulated in cancers, leading to increased invasion and metastasis, and decreased apoptosis. Ras activates a number of pathways but an especially important one seems to be the mitogen-activated protein (MAP) kinases, which themselves transmit signals downstream to other protein kinases and gene regulatory proteins.[2] # Activated and inactivated forms Ras is a G protein (specifically a small GTPase): a regulatory GTP hydrolase that cycles between two conformations – an activated or inactivated form, respectively RAS-GTP and RAS-GDP. It is activated by guanine exchange factors (GEFs, eg. CDC25, SOS1 and SOS2, SDC25 in yeast), which are themselves activated by mitogenic signals and through feedback from Ras itself. A GEF usually heightens the dissociation rate of the nucleotide – while not changing the association rate (effectively lower the affinity of the nucleotide) – thereby promoting its exchange. The cellular concentration of GTP is much higher than that of GDP so the exchange is usually GDP vs. GTP. It is inactivated by GTPase-activating proteins (GAPs, the most frequently cited one being RasGAP), which increase the rate of GTP hydrolysis, returning RAS to its GDP-bound form, simultaneously releasing an inorganic phosphate. # Attachments Ras is attached to the cell membrane by prenylation, and in health is a key component in many pathways which couple growth factor receptors to downstream mitogenic effectors involved in cell proliferation or differentiation.[3] The C-terminal CaaX box of Ras first gets farnesylated at its Cys residue in the cytosol and then inserted into the membrane of the endoplasmatic reticulum. 
The Tripeptid (aaX) is then cleaved from the C-terminus by a specific prenyl-protein specific endoprotease, the new C-terminus is then methylated by a methyltransferase. The so processed Ras is now transported to the plasma membrane. Most Ras forms are now further palmityolated, while K-Ras with its long positively charged stretch interacts electrostaticly with the membrane. # Ras in cancer Mutations in the Ras family of proto-oncogenes (comprising H-Ras, N-Ras and K-Ras) are very common, being found in 20% to 30% of all human tumours.[4] ## Inappropriate activation of the gene Inappropriate activation of the gene has been shown to play a key role in signal transduction, proliferation and malignant transformation.[2] Mutations in a number of different genes as well as RAS itself can have this effect. Oncogenes such as p210BCR-ABL or the growth receptor erbB are upstream of Ras, so if they are constitutively activated their signals will transduce through Ras. The tumour suppressor gene NF1 encodes a Ras-GAP – its mutation in neurofibromatosis will mean that Ras is less likely to be inactivated. Ras can also be amplified, although this only occurs occasionally in tumours. Finally, Ras oncogenes can be activated by point mutations so that its GTPase reaction can no longer be stimulated by GAP – this increases the half life of active Ras-GTP mutants.[3] ## Constitutively active Ras Constitutively active Ras (RasD) is one which contains mutations that prevent GTP hydrolysis, thus locking Ras in a permanently 'On' state. The most common mutations are found at residue G12 in the P-loop and the catalytic residue Q61. - The glycine to valine mutation at residue 12 renders the GTPase domain of Ras insensitive to inactivation by GAP and thus stuck in the "on state". Ras requires a GAP for inactivation as it is a relatively poor catalyst on its own, as opposed to other G-domain-containing proteins such as the alpha subunit of heterotrimeric G proteins. - Residue 61[5] is responsible for stabilizing the transition state for GTP hydrolysis. Because enzyme catalysis in general is achieved by lowering the energy barrier between substrate and product, mutation of Q61 necessarily reduces the rate of intrinsic Ras GTP hydrolysis to physiologically meaningless levels. See also "dominant negative" mutants such as S17N and D119N. # Human proteins containing Ras domain ARHE; ARHGAP5; CDC42; DIRAS1; DIRAS2; DIRAS3; ERAS; GEM; GRLF1; HRAS; KRAS; LOC393004; MRAS; NKIRAS1; NRAS; RAB10; RAB11A; RAB11B; RAB12; RAB13; RAB14; RAB15; RAB17; RAB18; RAB19; RAB1A; RAB1B; RAB2; RAB20; RAB21; RAB22A; RAB23; RAB24; RAB25; RAB26; RAB27A; RAB27B; RAB28; RAB2B; RAB30; RAB31; RAB32; RAB33A; RAB33B; RAB34; RAB35; RAB36; RAB37; RAB38; RAB39; RAB39B; RAB3A; RAB3B; RAB3C; RAB3D; RAB40A; RAB40AL; RAB40B; RAB40C; RAB41; RAB42; RAB43; RAB4A; RAB4B; RAB5A; RAB5B; RAB5C; RAB6A; RAB6B; RAB6C; RAB7A; RAB7B; RAB7L1; RAB8A; RAB8B; RAB9; RAB9B; RABL2A; RABL2B; RABL4; RAC1; RAC2; RAC3; RALA; RALB; RAN; RANP1; RAP1A; RAP1B; RAP2A; RAP2B; RAP2C; RASD1; RASD2; RASEF; RASL11A; RASL12; RBJ; REM1; REM2; RERG; RHEB; RHEBL1; RHOA; RHOB; RHOBTB1; RHOBTB2; RHOC; RHOD; RHOF; RHOG; RHOH; RHOJ; RHOQ; RHOU; RHOV; RIT1; RIT2; RND1; RND2; RND3; RRAD; RRAS; RRAS2; TC4;
https://www.wikidoc.org/index.php/Ras_(protein)
a2dac79beafd8292d8e16782841854741263913f
wikidoc
Rate equation
Rate equation The rate law or rate equation for a chemical reaction is an equation which links the reaction rate with concentrations or pressures of reactants and constant parameters (normally rate coefficients and partial reaction orders). To determine the rate equation for a particular system one combines the reaction rate with a mass balance for the system. For a generic reaction A + B → C the simple rate equation (as opposed to the much more common complicated rate equations) is of the form r = k(T)[A]^n[B]^m. In this equation, [X] expresses the concentration of a given species X, usually in mol/litre (molarity). k(T) is the reaction rate coefficient or rate constant, although it is not really a constant, as it includes everything that affects the reaction rate apart from concentration, such as temperature, but also ionic strength, surface area of the adsorbent or light irradiation. The exponents n and m are the reaction orders and depend on the reaction mechanism. The stoichiometric coefficients and the reaction orders are very often equal, but only in one-step (elementary) reactions do molecularity (the number of molecules or atoms actually colliding), stoichiometry and reaction order necessarily coincide. Complicated rate equations are not of the form above; they can be a sum of terms like it or have quantities in the denominator (see further sections). The rate equation is a differential equation, and it can be integrated in order to obtain an integrated rate equation that links concentrations of reactants or products with time. If the concentration of one of the reactants remains constant (because it is a catalyst or it is in great excess with respect to the other reactants) its concentration can be included in the rate constant, obtaining a pseudo constant: if B is the reactant whose concentration is constant, then r = k[A][B] = k'[A]. The second-order rate equation has been reduced to a pseudo-first-order rate equation, which makes the treatment to obtain an integrated rate equation much easier. # Zero-order reactions A zero-order reaction has a rate which is independent of the concentration of the reactant(s); increasing the concentration of the reacting species will not speed up the rate of the reaction. Zero-order reactions are typically found when a material required for the reaction to proceed, such as a surface or a catalyst, is saturated by the reactants. The rate law for a zero-order reaction is r = k, where r is the reaction rate and k is the reaction rate coefficient with units of concentration/time. If, and only if, this zero-order reaction 1) occurs in a closed system, 2) there is no net build-up of intermediates and 3) there are no other reactions occurring, it can be shown by solving a mass balance for the system that -d[A]/dt = k. If this differential equation is integrated it gives the equation often called the integrated zero-order rate law, [A]_t = [A]_0 - kt, where [A]_t represents the concentration of the chemical of interest at a particular time and [A]_0 represents the initial concentration. A reaction is zero order if concentration data plotted versus time give a straight line; the slope of this line is the negative of the zero-order rate constant k. The half-life of a reaction describes the time needed for half of the reactant to be depleted (the same as the half-life involved in nuclear decay, which is a first-order reaction).
For a zero-order reaction the half-life is given by t_1/2 = [A]_0/(2k). An example is the reversed Haber process: 2NH3(g) → 3H2(g) + N2(g). # First-order reactions A first-order reaction depends on the concentration of only one reactant (a unimolecular reaction). Other reactants can be present, but each will be zero-order. The rate law for an elementary reaction that is first order with respect to a reactant A is r = -d[A]/dt = k[A], where k is the first-order rate constant, which has units of 1/time. The integrated first-order rate law is ln[A] = ln[A]_0 - kt. A plot of ln[A] vs. time t gives a straight line with a slope of -k. The half-life of a first-order reaction is independent of the starting concentration and is given by t_1/2 = ln(2)/k. Examples of reactions that are first-order with respect to the reactant: - H2O2(l) → H2O(l) + 1/2 O2(g) - SO2Cl2(l) → SO2(g) + Cl2(g) - 2N2O5(g) → 4NO2(g) + O2(g) # Second-order reactions A second-order reaction depends on the concentration of one second-order reactant, or of two first-order reactants. For a second-order reaction, the rate law is r = k[A]^2 or r = k[A][B]. The integrated second-order rate laws are respectively 1/[A] = 1/[A]_0 + kt and ln([B][A]_0/([A][B]_0)) = k([B]_0 - [A]_0)t; [A]_0 and [B]_0 must be different in order to obtain the latter integrated equation. The half-life equation for a second-order reaction dependent on one second-order reactant is t_1/2 = 1/(k[A]_0); for such a reaction successive half-lives progressively double. Another way to present the first rate law is to take the log of both sides: ln r = ln k + 2 ln[A]. An example is 2NO2(g) → 2NO(g) + O2(g). ## Pseudo first order Measuring a second-order reaction rate can be problematic: the concentrations of the two reactants must be followed simultaneously, which is more difficult; or one of them is measured and the other calculated as a difference, which is less precise. A common solution for that problem is the pseudo-first-order approximation. If either [A] or [B] remains constant as the reaction proceeds, then the reaction can be considered pseudo first order because in fact it only depends on the concentration of one reactant. If, for example, [B] remains constant, then r = k[A][B] = k'[A], where k' = k[B]_0 (k', or k_obs, with units s^-1), and we have an expression identical to the first-order expression above. One way to obtain a pseudo-first-order reaction is to use a large excess of one of the reactants ([B] >> [A] would work for the previous example) so that, as the reaction progresses, only a small amount of the reactant in excess is consumed and its concentration can be considered to stay constant. By collecting k' for many reactions with different (but excess) concentrations of [B], a plot of k' versus [B] gives k (the regular second-order rate constant) as the slope. # Summary for reaction orders 0, 1, 2 and n For order 0 the rate law is r = k, the integrated rate law is [A] = [A]_0 - kt, k has units of M·s^-1 and the half-life is t_1/2 = [A]_0/(2k). For order 1, r = k[A], ln[A] = ln[A]_0 - kt, k has units of s^-1 and t_1/2 = ln(2)/k. For order 2, r = k[A]^2, 1/[A] = 1/[A]_0 + kt, k has units of M^-1·s^-1 and t_1/2 = 1/(k[A]_0). For a general order n, r = k[A]^n, [A]^(1-n) = [A]_0^(1-n) + (n-1)kt, k has units of M^(1-n)·s^-1 and t_1/2 = (2^(n-1) - 1)/((n-1)k[A]_0^(n-1)) (for n ≠ 1). Reactions with order 3 (called ternary reactions) are very rare, and extremely unlikely to occur. The known ones almost always involve dinitrogen pentoxide (N2O5). Here M stands for concentration (mol·L^-1), t for time, and k for the reaction rate constant.
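The integrated rate laws summarized above are easy to check numerically. The following Python sketch is an illustrative addition (not part of the original article): it integrates -d[A]/dt = k[A]^n for orders 0, 1 and 2 with a simple explicit Euler scheme and compares the result with the corresponding integrated rate law; the rate constants and initial concentration are arbitrary example values.
```python
import math

def euler_decay(order, k, a0, t, steps=100000):
    """Numerically integrate -d[A]/dt = k*[A]**order with the explicit Euler method."""
    a, dt = a0, t / steps
    for _ in range(steps):
        a = max(a - k * a**order * dt, 0.0)  # concentration cannot go negative
    return a

def analytic_decay(order, k, a0, t):
    """Integrated rate laws for orders 0, 1 and 2."""
    if order == 0:
        return max(a0 - k * t, 0.0)          # [A] = [A]0 - k*t
    if order == 1:
        return a0 * math.exp(-k * t)         # [A] = [A]0 * exp(-k*t)
    if order == 2:
        return 1.0 / (1.0 / a0 + k * t)      # 1/[A] = 1/[A]0 + k*t
    raise ValueError("order must be 0, 1 or 2")

if __name__ == "__main__":
    a0, t = 1.0, 5.0                          # arbitrary example values
    for order, k in [(0, 0.05), (1, 0.3), (2, 0.4)]:
        num = euler_decay(order, k, a0, t)
        ana = analytic_decay(order, k, a0, t)
        print(f"order {order}: numerical [A] = {num:.5f}, integrated law [A] = {ana:.5f}")
```
With a sufficiently small time step the numerical and analytic values agree to several decimal places, which is a quick way to confirm that an integrated rate law has been written down correctly.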
# Equilibrium reactions or opposed reactions A pair of forward and reverse reactions may define an equilibrium process. For example, A and B react into X and Y and vice versa (s, t, u and v are the stoichiometric coefficients): sA + tB ⇌ uX + vY. The reaction rate expression for the above reactions (assuming each is elementary) can be expressed as r = k1[A]^s[B]^t - k2[X]^u[Y]^v, where k1 is the rate coefficient for the reaction which consumes A and B, and k2 is the rate coefficient for the backwards reaction, which consumes X and Y and produces A and B. The constants k1 and k2 are related to the equilibrium coefficient for the reaction (K) by the following relationship (set r = 0 in the balance): K = k1/k2 = [X]^u[Y]^v/([A]^s[B]^t). In a simple equilibrium between two species, A ⇌ B, the constant K at equilibrium is expressed as K = kf/kb = [B]_e/[A]_e, where kf and kb are the forward and backward rate constants and [A]_e and [B]_e are the equilibrium concentrations. When the concentration of A at equilibrium is the concentration at time 0 minus the conversion x in moles, with x equal to the concentration of B at equilibrium, it follows that [A]_e = [A]_0 - x and [B]_e = x. The reaction rate then becomes -d[A]/dt = kf[A] - kb[B] = (kf + kb)([A] - [A]_e), which results in ln([A] - [A]_e) = ln([A]_0 - [A]_e) - (kf + kb)t. A plot of the negative natural logarithm of the concentration of A in time minus the concentration at equilibrium versus time t gives a straight line with slope kf + kb. By measurement of [A]_e and [B]_e the values of K and the two reaction rate constants will be known. When the equilibrium constant is close to unity and the reaction rates are very fast, for instance in conformational analysis of molecules, other methods are required for the determination of rate constants, for instance by complete lineshape analysis in NMR spectroscopy. # Consecutive reactions If the rate constants for the following reaction are k1 and k2, A → B → C, then the rate equations are: for reactant A, d[A]/dt = -k1[A]; for intermediate B, d[B]/dt = k1[A] - k2[B]; for product C, d[C]/dt = k2[B]. These differential equations can be solved analytically, and the integrated rate equations (supposing that the initial concentrations of every substance except A are zero) are [A] = [A]_0 e^(-k1 t), [B] = [A]_0 (k1/(k2 - k1)) (e^(-k1 t) - e^(-k2 t)) and [C] = ([A]_0/(k2 - k1)) [k2(1 - e^(-k1 t)) - k1(1 - e^(-k2 t))] = [A]_0 (1 + (k1 e^(-k2 t) - k2 e^(-k1 t))/(k2 - k1)). The steady state approximation leads to very similar results in an easier way. # Parallel or competitive reactions When a substance reacts simultaneously to give two different products, a parallel or competitive reaction is said to take place. - Two first-order reactions: A → B and A → C, with constants k1 and k2 and rate equations -d[A]/dt = (k1 + k2)[A], d[B]/dt = k1[A] and d[C]/dt = k2[A]. The integrated rate equations are then [A] = [A]_0 e^(-(k1+k2)t), [B] = (k1/(k1 + k2))[A]_0 (1 - e^(-(k1+k2)t)) and [C] = (k2/(k1 + k2))[A]_0 (1 - e^(-(k1+k2)t)). One important relationship in this case is [B]/[C] = k1/k2. - One first-order and one second-order reaction: this can be the case when studying a bimolecular reaction while a simultaneous hydrolysis (which can be treated as pseudo order one) takes place: the hydrolysis complicates the study of the reaction kinetics, because some reactant is being "spent" in a parallel reaction. For example, A reacts with R to give the product C, but meanwhile the hydrolysis reaction takes away an amount of A to give B, a byproduct: A + H2O → B and A + R → C. The rate equations are d[B]/dt = k1[A][H2O] = k1'[A] and d[C]/dt = k2[A][R], where k1' is the pseudo-first-order constant. The integrated rate equation for the main product is [C] = [R]_0 [1 - e^(-(k2/k1')[A]_0(1 - e^(-k1' t)))], which is equivalent to ln([R]_0/([R]_0 - [C])) = (k2[A]_0/k1')(1 - e^(-k1' t)).
The concentration of B is related to that of C through [B] = -(k1'/k2) ln(1 - [C]/[R]_0). The integrated equations were obtained analytically, but during the process it was assumed that [A]_0 - [C] ≈ [A]_0; therefore, the previous equation for [C] can only be used for low concentrations of [C] compared to [A]_0.
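As a check on the consecutive-reaction solution, the short Python sketch below is an illustrative addition (not part of the original article): it integrates the A → B → C system with a simple explicit Euler scheme and compares the result at one time point with the analytic expressions given above; the rate constants and initial concentration are arbitrary example values.
```python
import math

def consecutive_euler(k1, k2, a0, t, steps=200000):
    """Explicit Euler integration of A -> B -> C with rate constants k1 and k2."""
    a, b, c = a0, 0.0, 0.0
    dt = t / steps
    for _ in range(steps):
        ra, rb = k1 * a, k2 * b          # instantaneous rates of the two steps
        a -= ra * dt
        b += (ra - rb) * dt
        c += rb * dt
    return a, b, c

def consecutive_analytic(k1, k2, a0, t):
    """Integrated rate equations for A -> B -> C (valid for k1 != k2, with [B]0 = [C]0 = 0)."""
    a = a0 * math.exp(-k1 * t)
    b = a0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    c = a0 * (1 + (k1 * math.exp(-k2 * t) - k2 * math.exp(-k1 * t)) / (k2 - k1))
    return a, b, c

if __name__ == "__main__":
    k1, k2, a0, t = 0.5, 0.2, 1.0, 4.0   # arbitrary example values
    for label, (a, b, c) in [("Euler", consecutive_euler(k1, k2, a0, t)),
                             ("analytic", consecutive_analytic(k1, k2, a0, t))]:
        print(f"{label}: [A]={a:.4f}  [B]={b:.4f}  [C]={c:.4f}")
```
The two sets of concentrations agree closely and sum to [A]_0, reflecting the mass balance of the consecutive scheme; the same numerical approach also works when k1 = k2, where the analytic formulas above break down.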
Rate equation Editor-In-Chief: C. Michael Gibson, M.S., M.D. [1] The rate law or rate equation for a chemical reaction is an equation which links the reaction rate with concentrations or pressures of reactants and constant parameters (normally rate coefficients and partial reaction orders). [1] To determine the rate equation for a particular system one combines the reaction rate with a mass balance for the system.[2] For a generic reaction A + B → C the simple rate equation (as opposed to the much more common complicated rate equations) is of the form: In this equation, <math>[X]</math> expresses the concentration of a given X, usually in mol/litre (molarity). The k(T) is the reaction rate coefficient or rate constant, although it is not really a constant, as it includes everything that affects reaction rate outside concentration such as temperature but also including ionic strength, surface area of the adsorbent or light irradiation. The exponents n and m are the reaction orders and depend on the reaction mechanism. The stoichiometric coefficients and the reaction orders are very often equal, but only in one step reactions, molecularity (number of molecules or atoms actually colliding), stoichiometry and reaction order must be the same. Complicated rate equations are not of the form above, and they can be a sum of terms like it or have quantities in the denominator (see further sections) The rate equation is a differential equation, and it can be integrated in order to obtain an integrated rate equation that links concentrations of reactants or products with time. If the concentration of one of the reactants remains constant (because it is a catalyst or it is in great excess with respect to the other reactants) its concentration can be included in the rate constant, obtaining a pseudo constant: if B is the reactant whose concentration is constant then <math> r=k[A][B]=k'[A]</math>. The second order rate equation has been reduced to a pseudo first order rate equation. This makes the treatment to obtain an integrated rate equation much easier. # Zero-order reactions A Zero-order reaction has a rate which is independent of the concentration of the reactant(s). Increasing the concentration of the reacting species will not speed up the rate of the reaction. Zero-order reactions are typically found when a material required for the reaction to proceed, such as a surface or a catalyst, is saturated by the reactants. The rate law for a zero-order reaction is where r is the reaction rate, and k is the reaction rate coefficient with units of concentration/time. If, and only if, this zero-order reaction 1) occurs in a closed system, 2) there is no net build-up of intermediates and 3) there are no other reactions occurring, it can be shown by solving a Mass balance for the system that: If this differential equation is integrated it gives an equation which is often called the integrated zero-order rate law where <math>\ [A]_t</math> represents the concentration of the chemical of interest at a particular time, and <math>\ [A]_0</math> represents the initial concentration. A reaction is zero order if concentration data are plotted versus time and the result is a straight line. The slope of this resulting line is the negative of the zero order rate constant k. The half-life of a reaction describes the time needed for half of the reactant to be depleted (same as the half-life involved in nuclear decay, which is a first-order reaction). 
For a zero-order reaction the half-life is given by - Reversed Haber process: <math>2NH_3 (g) \rightarrow \; 3H_2 (g) + N_2 (g)</math> # First-order reactions A first-order reaction depends on the concentration of only one reactant (a unimolecular reaction). Other reactants can be present, but each will be zero-order. The rate law for an elementary reaction that is first order with respect to a reactant A is k is the first order rate constant, which has units of 1/time. The integrated first-order rate law is A plot of <math>\ln{[A]}</math> vs. time t gives a straight line with a slope of <math>-k</math>. The half life of a first-order reaction is independent of the starting concentration and is given by <math>\ t_ \frac{1}{2} = \frac{\ln{(2)}}{k}</math>. Examples of reactions that are first-order with respect to the reactant: - <math>\mbox{H}_2 \mbox{O}_2 (l) \rightarrow \; \mbox{H}_2\mbox{O} (l) + \frac{1}{2}\mbox{O}_2 (g)</math> - <math>\mbox{SO}_2 \mbox{Cl}_2 (l) \rightarrow \; \mbox{SO}_2 (g) + \mbox{Cl}_2 (g)</math> - <math>2\mbox{N}_2 \mbox{O}_5 (g) \rightarrow \; 4\mbox{NO}_2 (g) + \mbox{O}_2 (g)</math> # Second-order reactions A second-order reaction depends on the concentrations of one second-order reactant, or two first-order reactants. For a second order reaction, its reaction rate is given by: The integrated second-order rate laws are respectively [A]0 and [B]0 must be different, in order to obtain that integrated equation. The half-life equation for a second-order reaction dependent on one second-order reactant is <math>\ t_ \frac{1}{2} = \frac{1}{k[A]_0}</math>. For a second-order reaction half-lives progressively double. Another way to present the above rate laws is to take the log of both sides: <math>\ln{}r = \ln{}k + 2\ln\left[A\right] </math> - <math>2\mbox{NO}_2(g) \rightarrow \; 2\mbox{NO}(g) + \mbox{O}_2(g)</math> ## Pseudo first order Measuring a second order reaction rate can be problematic: the concentrations of the two reactants must be followed simultaneously, which is more difficult; or measure one of them and calculate the other as a difference, which is less precise. A common solution for that problem is the pseudo first order approximation If either [A] or [B] remain constant as the reaction proceeds, then the reaction can be considered pseudo first order because in fact it only depends on the concentration of one reactant. If for example [B] remains constant then: <math>\ r = k[A][B] = k'[A]</math> where <math>k'=k[B]_0</math> (k' or kobs with units s-1) and we have an expression identical to the first order expression above. One way to obtain a pseudo first order reaction is to use a large excess of one of the reactants ([B]>>[A] would work for the previous example) so that, as the reaction progresses only a small amount of the reactant is consumed and its concentration can be considered to stay constant. By collecting <math>k'</math> for many reactions with different (but excess) concentrations of [B]; a plot of <math>k'</math> versus [B] gives <math>k</math> (the regular second order rate constant) as the slope. # Summary for reaction orders 0, 1, 2 and n Reactions with order 3 (called ternary reactions) are very rare, and extremely unlikely to occur. The known ones almost always involve dinitrogen pentoxide (N2O5).[citation needed] Where M stands for concentration (mol · L−1), t for time, and k for for the reaction rate constant. # Equilibrium reactions or opposed reactions A pair of forward and reverse reactions may define an equilibrium process. 
For example A and B react into X and Y and vice versa (s, t, u and v are the stoichiometric coefficients): sA + tB ⇌ uX + vY The reaction rate expression for the above reactions (assuming they each are elementary) can be expressed as <math>\, r = k_1 [A]^{s}[B]^{t} - k_2 [X]^{u}[Y]^{v}</math>, where: k1 is the rate coefficient for the reaction which consumes A and B; k2 is the rate coefficient for the backwards reaction, which consumes X and Y and produces A and B. The constants k1 and k2 are related to the equilibrium coefficient for the reaction (K) by the following relationship (set r=0 in balance): <math>\, K = \frac{k_1}{k_2} = \frac{[X]^{u}[Y]^{v}}{[A]^{s}[B]^{t}}</math>. In a simple equilibrium between two species, <math>\, A \rightleftharpoons B</math>, with forward rate constant kf and backward rate constant kb, the constant K at equilibrium is expressed as: <math>\, K = \frac{k_f}{k_b} = \frac{[B]_e}{[A]_e}</math>. When the concentration of A at equilibrium is the concentration at time 0 minus the conversion, <math>\,[A]_e = [A]_0 - x_e</math>, with x equal to the concentration of B at equilibrium, <math>\,[B]_e = x_e</math>, then it follows that <math>\, K = \frac{k_f}{k_b} = \frac{x_e}{[A]_0 - x_e}</math> and <math>\, x_e = \frac{k_f}{k_f + k_b}[A]_0</math>. The reaction rate becomes: <math>\, \frac{dx}{dt} = k_f([A]_0 - x) - k_b x</math>, which results in: <math>\, -\ln\left([A]-[A]_e\right) = (k_f + k_b)t - \ln\left([A]_0-[A]_e\right)</math>. A plot of the negative natural logarithm of the concentration of A in time minus the concentration at equilibrium versus time t gives a straight line with slope kf + kb. By measurement of [A]e and [B]e the values of K and the two reaction rate constants will be known [3]. When the equilibrium constant is close to unity and the reaction rates very fast, for instance in conformational analysis of molecules, other methods are required for the determination of rate constants, for instance by complete lineshape analysis in NMR spectroscopy. # Consecutive reactions If the rate constants for the following reaction are <math>k_1</math> and <math>k_2</math>, <math> A \rightarrow \; B \rightarrow \; C </math>, then the rate equation is: For reactant A: <math> \frac{d[A]}{dt} = -k_1 [A] </math> For reactant B: <math> \frac{d[B]}{dt} = k_1 [A] - k_2 [B]</math> For product C: <math> \frac{d[C]}{dt} = k_2 [B]</math> These differential equations can be solved analytically and the integrated rate equations (supposing that initial concentrations of every substance except A are zero) are <math>[A]=[A]_0 e^{-k_1 t}</math> <math>[B]=[A]_0 \frac{k_1}{k_2 - k_1}\left ( e^{-k_1t}-e^{-k_2t} \right )</math> <math> [C] = \frac{[A]_0}{k_2-k_1} \left [ k_2 \left ( 1- e^{-k_1t} \right ) - k_1 \left (1- e^{-k_2t} \right ) \right ] \quad = [A]_0 \left (1 + \frac{k_1 e^{-k_2t}-k_2e^{-k_1t}}{k_2-k_1} \right )</math> The steady state approximation leads to very similar results in an easier way. # Parallel or competitive reactions When a substance reacts simultaneously to give two different products, a parallel or competitive reaction is said to take place. - Two first order reactions: <math> A \rightarrow \; B </math> and <math> A \rightarrow \; C </math>, with constants <math> k_1</math> and <math> k_2</math> and rate equations <math>-\frac{d[A]}{dt}=(k_1+k_2)[A]</math>, <math> \frac{d[B]}{dt}=k_1[A]</math> and <math> \frac{d[C]}{dt}=k_2[A]</math> The integrated rate equations are then <math>\ [A] = [A]_0 e^{-(k_1+k_2)t}</math>; <math>[B] = \frac{k_1}{k_1+k_2}[A]_0 (1-e^{-(k_1+k_2)t})</math> and <math>[C] = \frac{k_2}{k_1+k_2}[A]_0 (1-e^{-(k_1+k_2)t})</math>. One important relationship in this case is <math> \frac{[B]}{[C]}=\frac{k_1}{k_2}</math> - One first order and one second order reaction:[4] This can be the case when studying a bimolecular reaction and a simultaneous hydrolysis (which can be treated as pseudo order one) takes place: the hydrolysis complicates the study of the reaction kinetics, because some reactant is being "spent" in a parallel reaction.
For example A reacts with R to give the product C, but meanwhile the hydrolysis reaction takes away an amount of A to give B, a byproduct: <math> A + H_2O \rightarrow \ B </math> and <math> A + R \rightarrow \ C </math>. The rate equations are: <math> \frac{d[B]}{dt}=k_1[A][H_2O]=k_1'[A]</math> and <math> \frac{d[C]}{dt}=k_2[A][R]</math>, where <math>k_1'</math> is the pseudo first order constant. The integrated rate equation for the main product [C] is <math> [C]=[R]_0 \left [ 1-e^{-\frac{k_2}{k_1'}[A]_0(1-e^{-k_1't})} \right ] </math>, which is equivalent to <math> ln \frac{[R]_0}{[R]_0-[C]}=\frac{k_2[A]_0}{k_1'}(1-e^{-k_1't})</math>. The concentration of B is related to that of C through <math> [B]=-\frac{k_1'}{k_2} ln \left ( 1 - \frac{[C]}{[R]_0} \right )</math>. The integrated equations were obtained analytically, but during the process it was assumed that <math>[A]_0-[C]\approx \;[A]_0</math>; therefore, the previous equation for [C] can only be used for low concentrations of [C] compared to [A]0.
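As a closing illustration of the integrated equations in this article, the sketch below numerically integrates the consecutive scheme A → B → C with a simple Euler step and compares the result with the analytic expressions quoted in the consecutive-reactions section. The rate constants and initial concentration are assumed, illustrative values only:

```python
import math

# Consecutive reactions A -> B -> C, integrated with a simple Euler step.
k1, k2 = 0.8, 0.3              # assumed rate constants, 1/s
A0, t_end, dt = 1.0, 5.0, 1e-4 # initial [A] in mol/L, end time and step in s

A, B, C = A0, 0.0, 0.0
for _ in range(int(t_end / dt)):
    dA = -k1 * A * dt
    dB = (k1 * A - k2 * B) * dt
    dC = k2 * B * dt
    A, B, C = A + dA, B + dB, C + dC

# Analytic integrated equations from the consecutive-reactions section
A_exact = A0 * math.exp(-k1 * t_end)
B_exact = A0 * k1 / (k2 - k1) * (math.exp(-k1 * t_end) - math.exp(-k2 * t_end))
C_exact = A0 * (1 + (k1 * math.exp(-k2 * t_end) - k2 * math.exp(-k1 * t_end)) / (k2 - k1))

print(f"Euler:    A={A:.4f}  B={B:.4f}  C={C:.4f}")
print(f"analytic: A={A_exact:.4f}  B={B_exact:.4f}  C={C_exact:.4f}")
```

The two lines of output should agree to a few decimal places, with the small remaining difference coming from the finite Euler step.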
https://www.wikidoc.org/index.php/Rate_equation
797b0dd3b83109de9e4156a5c7ab581a7ea62cce
wikidoc
Reading frame
Reading frame In biology, a reading frame is a contiguous and non-overlapping set of three-nucleotide codons in DNA or RNA. There are three possible reading frames in an mRNA strand and six in a double-stranded DNA molecule, due to the two strands from which transcription is possible. This leads to the possibility of overlapping genes, and there may be many of these in bacteria. Some viruses, e.g. HBV and BYDV, use several overlapping genes in different reading frames. In rare cases a translating ribosome may shift from one frame to another, a translational frameshift. It is distinct from a frameshift mutation, as the nucleotide sequence (DNA or RNA) is not altered, only the frame in which it is read. A reading frame that contains a start codon and a stop codon is called an open reading frame (ORF).
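A short Python sketch (the sequence is made up purely for illustration) shows how the three forward reading frames of an mRNA string are obtained by starting the codon grouping at offsets 0, 1 and 2:

```python
# Illustrative only: split an mRNA sequence into codons in each of the
# three forward reading frames (offsets 0, 1 and 2).
mrna = "AUGGCCAUUGUAAUGGGCCGCUGAAAGGGUGCCCGAUAG"  # made-up example sequence

def reading_frames(seq):
    frames = []
    for offset in range(3):
        codons = [seq[i:i + 3] for i in range(offset, len(seq) - 2, 3)]
        frames.append(codons)
    return frames

for n, frame in enumerate(reading_frames(mrna)):
    print(f"frame +{n + 1}:", " ".join(frame))

# A double-stranded DNA molecule has three further frames, read from the
# reverse complement of the sequence on the other strand.
```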
https://www.wikidoc.org/index.php/Reading_frame
80b71f02a928cf849ad7cacfa0afe582d6c32f6d
wikidoc
RecQ helicase
RecQ helicase RecQ helicase is a family of helicase enzymes initially found in Escherichia coli that has been shown to be important in genome maintenance. They function through catalyzing the reaction ATP + H2O → ADP + P and thus driving the unwinding of paired DNA and translocating in the 3' to 5' direction. These enzymes can also drive the reaction NTP + H2O → NDP + P to drive the unwinding of either DNA or RNA. # Function In prokaryotes RecQ is necessary for plasmid recombination and DNA repair from UV-light, free radicals, and alkylating agents. This protein can also reverse damage from replication errors. In eukaryotes, replication does not proceed normally in the absence of RecQ proteins, which also function in aging, silencing, recombination and DNA repair. # Structure RecQ family members share three regions of conserved protein sequence referred to as the: - N-terminal – Helicase - middle – RecQ-conserved (RecQ-Ct) and - C-terminal – Helicase-and-RNase-D C-terminal (HRDC) domains. The removal of the N-terminal residues (Helicase and, RecQ-Ct domains) impairs both helicase and ATPase activity but has no effect on the binding ability of RecQ implying that the N-terminus functions as the catalytic end. Truncations of the C-terminus (HRDC domain) compromise the binding ability of RecQ but not the catalytic function. The importance of RecQ in cellular functions is exemplified by human diseases, which all lead to genomic instability and a predisposition to cancer. # Clinical significance There are at least five human RecQ genes; and mutations in three human RecQ genes are implicated in heritable human diseases: WRN gene in Werner syndrome (WS), BLM gene in Bloom syndrome (BS), and RECQ4 in Rothmund-Thomson syndrome. These syndromes are characterized by premature aging, and can give rise to the diseases of cancer, type 2 diabetes, osteoporosis, and atherosclerosis, which are commonly found in old age. These diseases are associated with high incidence of chromosomal abnormalities, including chromosome breaks, complex rearrangements, deletions and translocations, site specific mutations, and in particular sister chromatid exchanges (more common in BS) that are believed to be caused by a high level of somatic recombination. # Mechanism The proper function of RecQ helicases requires the specific interaction with topoisomerase III (Top 3). Top 3 changes the topological status of DNA by binding and cleaving single stranded DNA and passing either a single stranded or a double stranded DNA segment through the transient break and finally religating the break. The interaction of RecQ helicase with topoisomerase III at the N-terminal region is involved in the suppression of spontaneous and damage induced recombination and the absence of this interaction results in a lethal or very severe phenotype. The emerging picture clearly is that RecQ helicases in concert with Top 3 are involved in maintaining genomic stability and integrity by controlling recombination events, and repairing DNA damage in the G2-phase of the cell cycle. The importance of RecQ for genomic integrity is exemplified by the diseases that arise as a consequence of mutations or malfunctions in RecQ helicases; thus it is crucial that RecQ is present and functional to ensure proper human growth and development. ## WRN helicase The Werner syndrome ATP-dependent helicase (WRN helicase) is unusual among RecQ DNA family helicases in having an additional exonuclease activity. WRN interacts with DNA-PKcs and the Ku protein complex. 
This observation, combined with evidence that WRN deficient cells produce extensive deletions at sites of joining of non-homologous DNA ends, suggests a role for WRN protein in the DNA repair process of non-homologous end joining (NHEJ). WRN also physically interacts with the major NHEJ factor X4L4 (XRCC4-DNA ligase 4 complex). X4L4 stimulates WRN exonuclease activity that likely facilitates DNA end processing prior to final ligation by X4L4. WRN also appears to play a role in resolving recombination intermediate structures during homologous recombinational repair (HRR) of DNA double-strand breaks. WRN participates in a complex with RAD51, RAD54, RAD54B and ATR proteins in carrying out the recombination step during inter-strand DNA cross-link repair. Evidence was presented that WRN plays a direct role in the repair of methylation induced DNA damage. The process likely involves the helicase and exonuclease activities of WRN that operate together with DNA polymerase beta in long patch base excision repair. WRN was found to have a specific role in preventing or repairing DNA damages resulting from chronic oxidative stress, particularly in slowly replicating cells. This finding suggested that WRN may be important in dealing with oxidative DNA damages that underlie normal aging (see DNA damage theory of aging). ## BLM helicase Cells from humans with Bloom syndrome are sensitive to DNA damaging agents such as UV and methyl methanesulfonate indicating deficient DNA repair capability. The budding yeast Saccharomyces cerevisiae encodes an ortholog of the Bloom syndrome (BLM) protein that is designated Sgs1 (Small growth suppressor 1). Sgs1(BLM) is a helicase that functions in homologous recombinational repair of DNA double-strand breaks. The Sgs1(BLM) helicase appears to be a central regulator of most of the recombination events that occur during S. cerevisiae meiosis. During normal meiosis Sgs1(BLM) is responsible for directing recombination towards the alternate formation of either early non-crossovers or Holliday junction joint molecules, the latter being subsequently resolved as crossovers. In the plant Arabidopsis thaliana, homologs of the Sgs1(BLM) helicase act as major barriers to meiotic crossover formation. These helicases are thought to displace the invading strand allowing its annealing with the other 3’overhang end of the double-strand break, leading to non-crossover recombinant formation by a process called synthesis-dependent strand annealing (SDSA) (see Wikipedia article “Genetic recombination”). It is estimated that only about 5% of double-strand breaks are repaired by crossover recombination. Sequela-Arnaud et al. suggested that crossover numbers are restricted because of the long-term costs of crossover recombination, that is, the breaking up of favorable genetic combinations of alleles built up by past natural selection. ## RECQL4 helicase In humans, individuals with Rothmund-Thomson syndrome, and carrying the RECQL4 germline mutation, have several clinical features of accelerated aging. These features include atrophic skin and pigment changes, alopecia, osteopenia, cataracts and an increased incidence of cancer. RECQL4 mutant mice also show features of accelerated aging. RECQL4 has a crucial role in DNA end resection that is the initial step required for homologous recombination (HR)-dependent double-strand break repair. When RECQL4 is depleted, HR-mediated repair and 5’ end resection are severely reduced in vivo. 
RECQL4 also appears to be necessary for other forms of DNA repair including non-homologous end joining, nucleotide excision repair and base excision repair. The association of deficient RECQL4 mediated DNA repair with accelerated aging is consistent with the DNA damage theory of aging.
https://www.wikidoc.org/index.php/RecQ_helicase
0176b7912f64369d9f07f3e278a12a476bd447de
wikidoc
Rectus sheath
Rectus sheath The Rectus sheath is formed by the aponeuroses of the Obliqui and Transversus. It contains the Rectus abdominis and Pyramidalis muscles. It can be divided into anterior and posterior laminae. The arrangement of the layers has important variations at different locations in the body. # Below the costal margin For context, above the sheath are the following three layers: - superficial fascia - Camper's fascia - Scarpa's fascia Within the sheath, the layers vary: Below the sheath are the following three layers: - transversalis fascia - extraperitoneal fat - parietal peritoneum The Rectus, in the situation where its sheath is deficient below, is separated from the peritoneum only by the transversalis fascia, in contrast to the upper layers, where part of the internal oblique also runs beneath the rectus. Because of the thinner layers below, this region is more susceptible to herniation. # Above the costal margin Since the tendons of the Obliquus internus and Transversus only reach as high as the costal margin, it follows that above this level the sheath of the Rectus is deficient behind, the muscle resting directly on the cartilages of the ribs, and being covered merely by the tendon of the Obliquus externus. # Additional images - The Cremaster - The interfoveolar ligament, seen from in front.
https://www.wikidoc.org/index.php/Rectus_sheath
08bd7288008adaeac52ace68d99951e77abfb594
wikidoc
Reed Elsevier
Reed Elsevier Reed Elsevier is a global publisher and information provider. It came into being fall 1992 as the result of a merger between Reed International, a British trade book and magazine publisher, and the Dutch science publisher Elsevier NV, forming the Reed Elsevier group, a dual-listed company consisting of Reed Elsevier PLC and Reed Elsevier NV. It is listed on several of the world's major stock exchanges. # History ## Reed International In 1894, Albert E. Reed established a newsprint manufacture at Tovil Mill near Maidstone, Kent. In 1903, Albert E Reed was registered as a public company. In 1970, the company name was changed to Reed International Limited. The company originally grew by merging with other publishers and produced high quality trade journals as IPC Business Press Ltd and women's and other consumer magazines as IPC magazines Ltd. For a time the company published The Daily Mirror. The original family owners the Reeds were Methodists and encouraged good working conditions for their staff in the then dangerous print trade. They became known also for paying their staff well, and avoiding casual labour practices. The company however in modern times took full advantage of changing attitudes in the 1980s and was associated in job cutting exercises throughout its magazine empire, following union de-recognition in the 1990s (union recognition has since been regained in several business units). ## Elsevier NV In 1880, Jacobus George Robbers started a publishing company called NV Uitgeversmaatschappij Elsevier (Elsevier Publishing Company NV) to publish literary classics and the encyclopedia Winkler Prins. Robbers named the company after the old Dutch printers family Elzevir, which, for example, published the works of Erasmus in 1587. Elsevier NV originally was based in Rotterdam but moved to Amsterdam in the late 1880s. Up to the 1930s, Elsevier remained a small family-owned publisher, with no more than ten employees. After the war it launched the weekly Elseviers Weekblad), which turned out to be very profitable. A rapid expansion followed. Elsevier Press Inc. started in 1951 in Houston, Texas, and in 1962 publishing offices were opened in London and New York. Multiple mergers in the 1970s led to name changes, settling at Elsevier Scientific Publishers in 1979. Two years before the merger with Reed, Elsevier acquired Pergamon Press in the UK. # Company divisions Reed Elsevier conducts its business through the following divisions: - The science and medical publishing division is Elsevier. - The legal publishing division is LexisNexis. - The education division, Harcourt Education, is being sold to Houghton Mifflin. - The business division is Reed Business Information # Key products ScienceDirect contains over 25% of the world's science, technology and medicine full text and bibliographic information. Scopus is the world's largest abstract and citation database of research literature and quality web sources. Scopus is updated daily. Reed Business, Reed Elsevier's global Business division, is a provider of magazines, exhibitions, directories, online media and marketing services across five continents. Its prestige brands serve professionals across a diverse range of industries. These brands include Variety, New Scientist, totaljobs.com, Elsevier, Kellysearch, and the World Travel & Tourism Market. In February 2007, Reed Elsevier announced its intention to sell Harcourt, its educational publishing division. 
On 4 May 2007, Pearson, the international education and information company, announced that it had agreed to acquire Harcourt Assessment and Harcourt Education International from Reed Elsevier for $950m in cash. In July 2007, Reed Elsevier announced its agreement to sell the remaining Harcourt Education business, including international imprint Heinemann, to Houghton Mifflin Riverdeep Group for $4b in cash and stock. # Pricing issues Reed Elsevier has been criticised for the high prices of its journals and services, especially Elsevier and LexisNexis. Members of the scientific community have called for a boycott of Elsevier journals and a move to open-access publications such as those of the Public Library of Science or BioMed Central. # Defense Exhibitions Members of the medical and scientific communities, which purchase and use many journals published by Reed Elsevier, have agitated for the company to cut its links to the arms trade. Two UK academics, Dr. Tom Stafford of Sheffield University and Dr Nick Gill, have launched petitions calling on Reed Elsevier to stop organising arms fairs. A subsidiary, Spearhead, organizes defence shows, including a recent event where it was reported that cluster bombs and extremely powerful riot control equipment were offered for sale. In February 2007, Richard Smith, former editor of the British Medical Journal, published an editorial in the Journal of the Royal Society of Medicine, arguing that Reed Elsevier's involvement in both the arms trade and medical publishing constituted a conflict of interest. He suggested that if academics began to disengage from Reed Elsevier, the company would be likely to end their arms fairs, as arms fairs only comprise a small proportion of their business. On June 1, 2007, Reed Elsevier announced that they would be exiting the Defense Exhibition business during the second half of 2007. This means that the company will no longer organise arms fairs around the world. The decision followed a high-profile campaign, coordinated by CAAT, which highlighted the incompatibility of Reed's involvement in the arms trade and their position as the number one publisher of medical and science journals and other publications. CAAT welcomed the decision and applauded the board of Reed Elsevier for recognising the concerns of its stakeholders.
https://www.wikidoc.org/index.php/Reed_Elsevier
d146f2e4753e76da0a9ef1788cb71491dd68abea
wikidoc
Referred pain
Referred pain # Overview Referred pain is a very unpleasant sensation localized to an area separate from the site of the causative injury or other painful stimulation. Often, referred pain arises when a nerve is compressed or damaged at or near its origin. In this circumstance, the sensation of pain will generally be felt in the territory that the nerve serves, even though the damage originates elsewhere. # Examples A common example is spinal disc herniation, in which a nerve root arising from the spinal cord is compressed by adjacent disc material. Although pain may arise from the damaged disc itself, pain and/or other symptoms will also be felt in the region served by the compressed nerve (for example, the thigh, knee, or foot). Relieving the pressure on the nerve root may ameliorate the referred pain, provided that permanent nerve damage has not occurred. A similar mechanism may be responsible for some instances of the phantom limb syndrome in amputees. In another classic example of referred pain, male patients who are suffering a myocardial infarction (heart attack) feel pain in their left arm. Another example of referred pain is the common "ice cream headache" or "brain freeze", which happens when the trigeminal ganglion is indirectly stimulated by cold food on the roof of the mouth. Another example is pain from an inflamed gall bladder, which may be referred to the right shoulder, and pain from a herniated cervical disc, which may be referred down one or both arms into the hands. In addition, tooth pain may be referred to the opposite side of the mouth, rather than being felt in the tooth with the cavity or abscess. # Pathophysiology In cases of damage to viscera, referred pain may be due to convergence of visceral nerves that innervate the damaged organs with somatic nerves that innervate sections of skin. Because a neuron from the organ and one from the skin may form a synapse with the same projection neuron in the dorsal horn, input from either neuron will be interpreted the same way by it and all neurons further up the pathway. Since the brain is more "accustomed" to receiving sensation from the peripheral structure than from the viscera, it may interpret the pain as originating from the former. Thus there is an array of diseases that cause damage to organs and which produce characteristic patterns of pain in unrelated places in the body's periphery. Despite the proliferation of literature on the mechanisms of referred pain, it is a process that is still not well understood. Interestingly, there also seems to be some sort of pattern to referred pain in the symptoms associated with various disorders: for instance, many people are familiar with a physical symptom being associated with emotional distress, nausea or headaches, but there is only a very small chance that a person with a heart condition will have a toothache and no other obvious symptoms at all. Other examples include joint pain associated with a kidney infection, or a digestive disorder being felt in a headache. It is not fully understood why these symptoms occur the way they do. # Theories of referred pain 1. Convergence theory – The nerves from the visceral structures and the somatic structures to which pain is referred enter the CNS at the same level and converge on the same spinothalamic neurons.
Since somatic pain is far more common than visceral pain, when the same afferent pathway is stimulated by signals that originate in visceral afferent nerves, the signal that reaches the somatosensory cortex is identical and is interpreted as having arisen within the somatic area. 2. Facilitation theory – The afferent impulses from visceral structures produce subliminal fringe effects that lower the excitability threshold of spinothalamic neurons which receive afferent fibres from somatic areas. Therefore, any slight activity in the pathways transmitting pain impulses from somatic regions, which normally would die out within the spinal cord, is facilitated and thus reaches conscious levels.
https://www.wikidoc.org/index.php/Referred_pain
558ea53011b772a74efb3bf8245ed635e77d4e0b
wikidoc
Reflex hammer
Reflex hammer A reflex hammer is a medical instrument used by physicians to test deep tendon reflexes. Testing for reflexes is an important part of the neurological physical examination in order to detect abnormalities in the central or peripheral nervous system. Reflex hammers can also be used for chest percussion. # Models of reflex hammer Prior to the development of specialized reflex hammers, hammers specific for percussion of the chest were used to elicit reflexes. However, this proved to be cumbersome, as the weight of the chest percussion hammer was insufficient to generate an adequate stimulus for a reflex. Starting in the late 19th century, several models of specific reflex hammers were created: - The Taylor or tomahawk reflex hammer was designed by John Madison Taylor in 1888 and is the most well known reflex hammer in the USA. It consists of a triangular rubber component which is attached to a flat metallic handle. - The Queen Square reflex hammer was designed for use at the National Hospital for Nervous Diseases (now the National Hospital for Neurology and Neurosurgery) in Queen Square, London. It was originally made with a bamboo or cane handle of varying length, of average 25 to 40 centimetres, attached to a 5 centimetre metal disk with a plastic bumper. The Queen Square hammer is also now made with plastic molds, and often has a sharp tapered end to allow for testing of plantar reflexes. It is the reflex hammer of choice of the UK neurologist. - The Babinski reflex hammer was designed by Joseph Babiński in 1912 and is similar to the Queen Square hammer, except that it has a metallic handle that is often detachable. Babinski hammers can also be telescoping, allowing for compact storage. Babinski's hammer was popularized in clinical use in America by the neurologist Abraham Rabiner, who was given the instrument as a peace offering by Babinski after the two brawled at a black tie affair in Vienna. - Other reflex hammer types include the Trömner, Buck, Berliner and Stookey reflex hammers. - A Taylor, or tomahawk reflex hammer. (Image courtesy of Charlie Goldberg, M.D., UCSD School of Medicine and VA Medical Center, San Diego, California) - Another type of reflex hammers. (Image courtesy of Charlie Goldberg, M.D., UCSD School of Medicine and VA Medical Center, San Diego, California) - A large hammer; Head oriented horizontally. (Image courtesy of Charlie Goldberg, M.D., UCSD School of Medicine and VA Medical Center, San Diego, California) - A large hammer; Head oriented vertically. (Image courtesy of Charlie Goldberg, M.D., UCSD School of Medicine and VA Medical Center, San Diego, California) - The Queen Square reflex hammer, shown with a plastic handle and a tip that tapers to allow for plantar reflex testing # Method of use The strength of a reflex is used to gauge central and peripheral nervous system disorders, with the former resulting in hyperreflexia, or exaggerated reflexes, and the latter resulting in hyporeflexia or diminished reflexes. However, the strength of the stimulus used to extract the reflex also affects the magnitude of the reflex. Attempts have been made to determine the force required to elicit a reflex, but vary depending on the hammer used, and are difficult to quantify. The Taylor hammer is usually held at the end by the physician, and the entire device is swung in an arc-like motion onto the tendon in question. 
The Queen Square and Babinski hammers are usually held perpendicular to the tendon in question, and are passively swung with gravity assistance onto the tendon. The Jendrassik maneuver, which entails interlocking of flexed fingers to distract a patient, can also be used to accentuate reflexes. In cases of hyperreflexia, the physician may place his finger on top of the tendon, and tap the finger with the hammer. Sometimes a reflex hammer may not be necessary to elicit hyperreflexia, with finger tapping over the tendon being sufficient as a stimulus.
https://www.wikidoc.org/index.php/Reflex_hammer
eff082b6fb65851aeb1ef8f9d2836205b1f74234
wikidoc
Refrigeration
Refrigeration Refrigeration is the process of removing heat from an enclosed space, or from a substance, and rejecting it elsewhere for the primary purpose of lowering the temperature of the enclosed space or substance and then maintaining that lower temperature. The term cooling refers generally to any natural or artificial process by which heat is dissipated. The process of artificially producing extreme cold temperatures is referred to as cryogenics. Cold is the absence of heat; hence, in order to decrease a temperature, one "removes heat" rather than "adds cold." In order to satisfy the Second Law of Thermodynamics, some form of work must be performed to accomplish this. This work is traditionally supplied by mechanical means, but it can also be done by magnetism, laser or other means. However, all refrigeration uses the three basic methods of heat transfer: convection, conduction, or radiation. # Historical applications ## Ice harvesting The use of ice to refrigerate and thus preserve food goes back to prehistoric times. Through the ages, the seasonal harvesting of snow and ice was a regular practice of most of the ancient cultures: Chinese, Hebrews, Greeks, Romans, Persians. Ice and snow were stored in caves or dugouts lined with straw or other insulating materials. The Persians stored ice in pits called yakhchals. Rationing of the ice allowed the preservation of foods over the cold periods. This practice worked well down through the centuries, with icehouses remaining in use into the twentieth century. In the 16th century, the discovery of chemical refrigeration was one of the first steps toward artificial means of refrigeration. Sodium nitrate or potassium nitrate, when added to water, lowered the water temperature and created a sort of refrigeration bath for cooling substances. In Italy, such a solution was used to chill wine. During the first half of the 19th century, ice harvesting became big business in America. New Englander Frederic Tudor, who became known as the "Ice King", worked on developing better insulation products for the long distance shipment of ice, especially to the tropics. ## First refrigeration systems The first known method of artificial refrigeration was demonstrated by William Cullen at the University of Glasgow in Scotland in 1748. Cullen used a pump to create a partial vacuum over a container of diethyl ether, which then boiled, absorbing heat from the surrounding air. The experiment even created a small amount of ice, but had no practical application at that time. In 1805, American inventor Oliver Evans designed but never built a refrigeration system based on the vapor-compression refrigeration cycle rather than chemical solutions or volatile liquids such as ethyl ether. In 1820, the British scientist Michael Faraday liquefied ammonia and other gases by using high pressures and low temperatures. An American living in Great Britain, Jacob Perkins, obtained the first patent for a vapor-compression refrigeration system in 1834. Perkins built a prototype system and it actually worked, although it did not succeed commercially. In 1842, an American physician, John Gorrie, designed the first system for refrigerating water to produce ice. He also conceived the idea of using his refrigeration system to cool the air for comfort in homes and hospitals (i.e., air-conditioning). His system compressed air, then partially cooled the hot compressed air with water before allowing it to expand while doing part of the work required to drive the air compressor.
That isentropic expansion cooled the air to a temperature low enough to freeze water and produce ice, or to flow "through a pipe for effecting refrigeration otherwise" as stated in his patent granted by the U.S. Patent Office in 1851. Gorrie built a working prototype, but his system was a commercial failure. Alexander Twining began experimenting with vapor-compression refrigeration in 1848 and obtained patents in 1850 and 1853. He is credited with having initiated commercial refrigeration in the United States by 1856. Meanwhile, James Harrison, who was born in Scotland and subsequently emigrated to Australia, began operation of a mechanical ice-making machine in 1851 on the banks of the Barwon River at Rocky Point in Geelong. His first commercial ice-making machine followed in 1854 and his patent for an ether liquid-vapour compression refrigeration system was granted in 1855. Harrison introduced commercial vapor-compression refrigeration to breweries and meat packing houses and by 1861, a dozen of his systems were in operation. Australian, Argentinean and American concerns experimented with refrigerated shipping in the mid 1870s, the first commercial success coming when William Soltau Davidson fitted a compression refrigeration unit to the New Zealand vessel Dunedin in 1882, leading to a meat and dairy boom in Australasia and South America. The first gas absorption refrigeration system using gaseous ammonia dissolved in water (referred to as "aqua ammonia") was developed by Ferdinand Carré of France in 1859 and patented in 1860. Due to the toxicity of ammonia, such systems were not developed for use in homes, but were used to manufacture ice for sale. In the United States, the consumer public at that time still used the ice box with ice brought in from commercial suppliers, many of whom were still harvesting ice and storing it in an icehouse. Thaddeus Lowe, an American balloonist from the Civil War, had experimented over the years with the properties of gases. One of his mainstay enterprises was the high-volume production of hydrogen gas. He also held several patents on ice making machines. His "Compression Ice Machine" would revolutionize the cold storage industry. In 1869 he and other investors purchased an old steamship onto which they loaded one of Lowe’s refrigeration units and began shipping fresh fruit from New York to the Gulf Coast area, and fresh meat from Galveston, Texas back to New York. Because of Lowe’s lack of knowledge about shipping, the business was a costly failure, and it was difficult for the public to get used to the idea of being able to consume meat that had been so long out of the packing house. Domestic mechanical refrigerators became available in the United States around 1911. ## Widespread commercial use By the 1870s breweries had become the largest users of commercial refrigeration units, though some still relied on harvested ice. Though the ice-harvesting industry had grown immensely by the turn of the 20th century, pollution and sewage had begun to creep into natural ice, making it a problem in the metropolitan suburbs. Eventually breweries began to complain of tainted ice. This raised demand for more modern and consumer-ready refrigeration and ice-making machines. In 1895 German engineer Carl von Linde set up a large-scale process for the production of liquid air and eventually liquid oxygen for use in safe household refrigerators. Refrigerated railroad cars were introduced in the US in the 1840s for the short-run transportation of dairy products.
In 1867 J.B. Sutherland of Detroit, Michigan patented the refrigerator car, designed with ice tanks at either end of the car and ventilator flaps near the floor which would create a gravity draft of cold air through the car. By 1900 the meat packing houses of Chicago had adopted ammonia-cycle commercial refrigeration. By 1914 almost every location used artificial refrigeration. The big meat packers, Armour, Swift, and Wilson, had purchased the most expensive units which they installed on train cars and in branch houses and storage facilities in the more remote distribution areas.
It was not until the middle of the 20th century that refrigeration units were designed for installation on tractor-trailer rigs (trucks or lorries). Refrigerated vehicles are used to transport perishable goods, such as frozen foods, fruit and vegetables, and temperature-sensitive chemicals. Most modern refrigerated vehicles keep the temperature between -40 and +20 °C and have a maximum payload of around 24,000 kg gross weight (in Europe).
## Home and consumer use
With the invention of synthetic refrigerants, based mostly on chlorofluorocarbon (CFC) chemicals, safer refrigerators were possible for home and consumer use. Freon is a trademark of the DuPont Corporation and refers to these CFC, and later hydrochlorofluorocarbon (HCFC) and hydrofluorocarbon (HFC), refrigerants. Developed in the late 1920s, these refrigerants were considered less harmful than the refrigerants then in common use, including methyl formate, ammonia, methyl chloride, and sulfur dioxide. The intent was to provide refrigeration equipment for home use without endangering the lives of the occupants. These CFC refrigerants answered that need.
## The Montreal Protocol
As of 1989, CFC-based refrigerants were banned via the Montreal Protocol due to the negative effects they have on the ozone layer. The Montreal Protocol was ratified by most CFC producing and consuming nations in Montreal, Quebec, Canada in September 1987. Greenpeace objected to the ratification because the Montreal Protocol instead permitted the use of HFC refrigerants, which are not ozone-depleting but are still powerful greenhouse gases. Searching for an alternative for home refrigeration, DKK Scharfenstein (Germany), with assistance from Greenpeace, developed a propane-based refrigerator free of both CFCs and HFCs in 1992.
The tenets of the Montreal Protocol were put into effect in the United States via the Clean Air Act legislation in August 1988. The Clean Air Act was further amended in 1990. This was a direct result of a scientific report released in June 1974 by Rowland and Molina, detailing how chlorine in CFC and HCFC refrigerants adversely affected the ozone layer. This report prompted the FDA and EPA to ban CFCs as a propellant in 1978 (50% of CFC use at that time was for aerosol can propellant).
- In January 1992, the EPA required that refrigerant be recovered from all automotive air conditioning systems during system service.
- In July 1992, the EPA made illegal the venting of CFC and HCFC refrigerants.
- In June 1993, the EPA required that major leaks in refrigeration systems be fixed within 30 days. A major leak was defined as a leak rate that would equal 35% of the total refrigerant charge of the system (for industrial and commercial refrigerant systems), or 15% of the total refrigerant charge of the system (for all other large refrigerant systems), if that leak were to proceed for an entire year.
- In July 1993, the EPA instituted the Safe Disposal Requirements, requiring that all refrigerant systems be evacuated prior to retirement or disposal (no matter the size of the system), and putting the onus on the last person in the disposal chain to ensure that the refrigerant was properly captured.
- In August 1993, the EPA implemented reclamation requirements for refrigerant. If a refrigerant is to change ownership, it must be processed and tested to comply with the American Refrigeration Institute (ARI) standard 700-1993 (now ARI standard 700-1995) requirements for refrigerant purity.
- In November 1993, the EPA required that all refrigerant recovery equipment meet the standards of ARI 740-1993.
- In November 1995, the EPA also restricted the venting of HFC refrigerants. These contain no chlorine that can damage the ozone layer (and thus have an ODP (Ozone Depletion Potential) of zero), but still have a high global warming potential.
- In December 1995, CFC refrigerant importation and production in the US was banned. It is currently planned to ban all HCFC refrigerant importation and production in the year 2030, although that will likely be accelerated.
# Current applications of refrigeration
Probably the most widely used current applications of refrigeration are for the air-conditioning of private homes and public buildings, and the refrigeration of foodstuffs in homes, restaurants and large storage warehouses. The use of refrigerators in our kitchens for the storage of fruits and vegetables has allowed us to add fresh salads to our diets year round, and to store fish and meats safely for long periods.
In commerce and manufacturing, there are many uses for refrigeration. Refrigeration is used to liquefy gases such as oxygen, nitrogen, propane and methane. In compressed air purification, it is used to condense water vapor from compressed air to reduce its moisture content. In oil refineries, chemical plants, and petrochemical plants, refrigeration is used to maintain certain processes at their required low temperatures (for example, in the alkylation of butenes and butane to produce a high octane gasoline component). Metal workers use refrigeration to temper steel and cutlery. In transporting temperature-sensitive foodstuffs and other materials by trucks, trains, airplanes and sea-going vessels, refrigeration is a necessity.
Dairy products are constantly in need of refrigeration, and it was only discovered in the past few decades that eggs needed to be refrigerated during shipment rather than after arrival at the grocery store. Meats, poultry and fish all must be kept in climate-controlled environments before being sold. Refrigeration also helps keep fruits and vegetables edible longer.
One of the most influential uses of refrigeration was in the development of the sushi/sashimi industry in Japan. Prior to the advent of refrigeration, many sushi connoisseurs suffered great morbidity and mortality from diseases such as hepatitis A. However, the dangers of unrefrigerated sashimi were not brought to light for decades due to the lack of research and healthcare distribution across rural Japan. Around mid-century, the Zojirushi Corporation, based in Kyoto, made breakthroughs in refrigerator design, making refrigerators cheaper and more accessible for restaurant proprietors and the general public.
# Methods of refrigeration
Methods of refrigeration can be classified as non-cyclic, cyclic, thermoelectric and magnetic.
## Non-cyclic refrigeration
In these methods, refrigeration can be accomplished by melting ice or by subliming dry ice. These methods are used for small-scale refrigeration such as in laboratories and workshops, or in portable coolers.
Ice owes its effectiveness as a cooling agent to its constant melting point of 0 °C (32 °F). In order to melt, ice must absorb 333.55 kJ/kg (approx. 144 Btu/lb) of heat. Foodstuffs maintained at this temperature or slightly above have an increased storage life. Solid carbon dioxide, known as dry ice, is also used as a refrigerant. Having no liquid phase at normal atmospheric pressure, it sublimes directly from the solid to vapor phase at a temperature of -78.5 °C (-109.3 °F). Dry ice is effective for maintaining products at low temperatures during the period of sublimation.
## Cyclic refrigeration
This consists of a refrigeration cycle, where heat is removed from a low-temperature space or source and rejected to a high-temperature sink with the help of external work, and its inverse, the thermodynamic power cycle. In the power cycle, heat is supplied from a high-temperature source to the engine, part of the heat being used to produce work and the rest being rejected to a low-temperature sink. This satisfies the second law of thermodynamics.
A refrigeration cycle describes the changes that take place in the refrigerant as it alternately absorbs and rejects heat as it circulates through a refrigerator. It is also applied to HVACR work, when describing the "process" of refrigerant flow through an HVACR unit, whether it is a packaged or split system.
Heat naturally flows from hot to cold. Work is applied to cool a living space or storage volume by pumping heat from a lower temperature heat source into a higher temperature heat sink. Insulation is used to reduce the work and energy required to achieve and maintain a lower temperature in the cooled space. The operating principle of the refrigeration cycle was described mathematically by Sadi Carnot in 1824 as a heat engine. The most common types of refrigeration systems use the reverse-Rankine vapor-compression refrigeration cycle, although absorption heat pumps are used in a minority of applications.
Cyclic refrigeration can be classified as:
- Vapor cycle, and
- Gas cycle
Vapor cycle refrigeration can further be classified as:
- Vapor compression refrigeration
- Vapor absorption refrigeration
### Vapor-compression cycle
The vapor-compression cycle is used in most household refrigerators as well as in many large commercial and industrial refrigeration systems. A typical system consists of a compressor, a condenser, an expansion valve and an evaporator, and the thermodynamics of the cycle can be analyzed on a pressure-enthalpy or temperature-entropy diagram, with the state points numbered 1 through 5 in the description that follows.
In this cycle, a circulating refrigerant such as Freon enters the compressor as a vapor. From point 1 to point 2, the vapor is compressed at constant entropy and exits the compressor superheated. From point 2 to point 3 and on to point 4, the superheated vapor travels through the condenser which first cools and removes the superheat and then condenses the vapor into a liquid by removing additional heat at constant pressure and temperature. Between points 4 and 5, the liquid refrigerant goes through the expansion valve (also called a throttle valve) where its pressure abruptly decreases, causing flash evaporation and auto-refrigeration of, typically, less than half of the liquid.
That results in a mixture of liquid and vapor at a lower temperature and pressure as shown at point 5. The cold liquid-vapor mixture then travels through the evaporator coil or tubes and is completely vaporized by cooling the warm air (from the space being refrigerated) being blown by a fan across the evaporator coil or tubes. The resulting refrigerant vapor returns to the compressor inlet at point 1 to complete the thermodynamic cycle.
The above discussion is based on the ideal vapor-compression refrigeration cycle, and does not take into account real-world effects like frictional pressure drop in the system, slight thermodynamic irreversibility during the compression of the refrigerant vapor, or non-ideal gas behavior (if any). More information about the design and performance of vapor-compression refrigeration systems is available in the classic "Perry's Chemical Engineers' Handbook". A worked example of the cycle's coefficient of performance is sketched at the end of this section.
### Vapor absorption cycle
In the early years of the twentieth century, the vapor absorption cycle using water-ammonia systems was popular and widely used but, after the development of the vapor compression cycle, it lost much of its importance because of its low coefficient of performance (about one fifth of that of the vapor compression cycle). Nowadays, the vapor absorption cycle is used only where waste heat is available, where heat is derived from solar collectors, or where electricity is unavailable.
The absorption cycle is similar to the compression cycle, except for the method of raising the pressure of the refrigerant vapor. In the absorption system, the compressor is replaced by an absorber which dissolves the refrigerant in a suitable liquid, a liquid pump which raises the pressure and a generator which, on heat addition, drives off the refrigerant vapor from the high-pressure liquid. Some work is required by the liquid pump but, for a given quantity of refrigerant, it is much smaller than needed by the compressor in the vapor compression cycle. In an absorption refrigerator, a suitable combination of refrigerant and absorbent is used. The most common combinations are ammonia (refrigerant) and water (absorbent), and water (refrigerant) and lithium bromide (absorbent).
### Gas cycle
When the working fluid is a gas that is compressed and expanded but does not change phase, the refrigeration cycle is called a gas cycle. Air is most often the working fluid. As there is no condensation and evaporation intended in a gas cycle, the components corresponding to the condenser and evaporator in a vapor compression cycle are the hot and cold gas-to-gas heat exchangers.
The gas cycle is less efficient than the vapor compression cycle because the gas cycle works on the reverse Brayton cycle instead of the reverse Rankine cycle. As such, the working fluid does not receive and reject heat at constant temperature. In the gas cycle, the refrigeration effect is equal to the product of the specific heat of the gas and the rise in temperature of the gas on the low temperature side. Therefore, for the same cooling load, a gas refrigeration cycle requires a larger mass flow rate and is bulkier.
Because of their lower efficiency and larger bulk, air cycle coolers are not often used nowadays in terrestrial cooling devices. The air cycle machine is very common, however, on gas turbine-powered 'jet' aircraft because compressed air is readily available from the engines' compressor sections. These jet aircraft's cooling and ventilation units also serve the purpose of pressurizing the aircraft.
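To make the efficiency comparison between these cycles concrete, the following is a minimal sketch of how the coefficient of performance (COP) of the ideal vapor-compression cycle described above can be computed from refrigerant enthalpies at the numbered state points. The enthalpy figures used here are illustrative assumptions only, not values for any particular refrigerant; real design work would take them from refrigerant property tables.

```python
# Minimal sketch: COP of an idealized vapor-compression cycle.
# The throttling step is isenthalpic, so the enthalpy entering the
# evaporator (point 5) equals that leaving the condenser (point 4).

def cop_vapor_compression(h1, h2, h4):
    """h1: vapor entering the compressor (kJ/kg)
    h2: superheated vapor leaving the compressor (kJ/kg)
    h4: liquid leaving the condenser (kJ/kg)"""
    refrigeration_effect = h1 - h4   # heat absorbed per kg in the evaporator
    compressor_work = h2 - h1        # ideal work input per kg of refrigerant
    return refrigeration_effect / compressor_work

# Illustrative, assumed enthalpy values (kJ/kg), not from a property table:
print(cop_vapor_compression(h1=400.0, h2=430.0, h4=250.0))  # prints 5.0
```

A COP of around 5 means each unit of compressor work pumps roughly five units of heat out of the refrigerated space, which is part of why the vapor-compression cycle dominates terrestrial applications over the bulkier gas cycle.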
## Thermoelectric refrigeration
Thermoelectric cooling uses the Peltier effect to create a heat flux between the junction of two different types of materials. This effect is commonly used in camping and portable coolers and for cooling electronic components and small instruments.
## Magnetic refrigeration
Magnetic refrigeration, or adiabatic demagnetization, is a cooling technology based on the magnetocaloric effect, an intrinsic property of magnetic solids. The refrigerant is often a paramagnetic salt, such as cerium magnesium nitrate. The active magnetic dipoles in this case are those of the electron shells of the paramagnetic atoms.
A strong magnetic field is applied to the refrigerant, forcing its various magnetic dipoles to align and putting these degrees of freedom of the refrigerant into a state of lowered entropy. A heat sink then absorbs the heat released by the refrigerant due to its loss of entropy. Thermal contact with the heat sink is then broken so that the system is insulated, and the magnetic field is switched off. This increases the heat capacity of the refrigerant, thus decreasing its temperature below the temperature of the heat sink. Because few materials exhibit the required properties at room temperature, applications have so far been limited to cryogenics and research.
## Other methods
Other methods of refrigeration include the air cycle machine used in aircraft; the vortex tube, used for spot cooling when compressed air is available; and thermoacoustic refrigeration, which uses sound waves in a pressurised gas to drive heat transfer and heat exchange.
# Unit of refrigeration
Domestic and commercial refrigerators may be rated in kJ/s, or Btu/h of cooling. Commercial refrigerators in the US are mostly rated in tons of refrigeration, but elsewhere in kW. One ton of refrigeration capacity can freeze one short ton of water at 0 °C (32 °F) in 24 hours. Based on that:
1 ton of refrigeration = 2,000 lb × 144 Btu/lb ÷ 24 h = 12,000 Btu/h = 3.517 kW.
A much less common definition is: 1 tonne of refrigeration is the rate of heat removal required to freeze a metric ton (i.e., 1000 kg) of water at 0 °C in 24 hours. Based on the heat of fusion being 333.55 kJ/kg, 1 tonne of refrigeration = 13,898 kJ/h = 3.861 kW. As can be seen, 1 tonne of refrigeration is 10% larger than 1 ton of refrigeration.
Most residential air conditioning units range in capacity from about 1 to 5 tons of refrigeration.
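As a quick check on these figures, here is a minimal sketch, an illustrative addition rather than part of the original article, that reproduces the ton and tonne arithmetic using the heat-of-fusion values quoted above.

```python
# Minimal sketch reproducing the ton/tonne of refrigeration arithmetic.
BTU_PER_LB = 144.0        # heat of fusion of ice in Btu/lb (used for the ton)
KJ_PER_KG = 333.55        # heat of fusion of ice in kJ/kg (used for the tonne)
BTU_TO_KJ = 1.055056      # kJ per Btu
HOURS = 24

ton_btu_per_h = 2000 * BTU_PER_LB / HOURS        # 12,000 Btu/h
ton_kw = ton_btu_per_h * BTU_TO_KJ / 3600        # about 3.517 kW
tonne_kj_per_h = 1000 * KJ_PER_KG / HOURS        # about 13,898 kJ/h
tonne_kw = tonne_kj_per_h / 3600                 # about 3.861 kW

print(ton_btu_per_h, round(ton_kw, 3))           # 12000.0 3.517
print(round(tonne_kj_per_h), round(tonne_kw, 3)) # 13898 3.861
print(round(tonne_kw / ton_kw, 2))               # 1.1, i.e. roughly 10% larger
```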
Reinforcement
In operant conditioning, reinforcement is an increase in the strength of a response produced by a change in the environment that immediately follows that response. Response strength can be assessed by measures such as the frequency with which the response is made (for example, a pigeon may peck a key more times in the session), or the speed with which it is made (for example, a rat may run a maze faster). The environmental change contingent upon the response is called a reinforcer. Reinforcement can only be confirmed retrospectively, as objects, items, food or other potential 'reinforcers' can only be called such by demonstrating increases in behavior after their administration. It is the strength of the response that is reinforced, not the organism.
# Types of reinforcement
B.F. Skinner, the researcher who articulated the major theoretical constructs of reinforcement and behaviorism, refused to specify causal origins of reinforcers. Skinner argued that reinforcers are defined by a change in response strength (that is, functionally rather than causally), and that what is a reinforcer to one person may not be to another. Accordingly, activities, foods or items which are generally considered pleasant or enjoyable may not necessarily be reinforcing; they can only be considered so if the behavior that immediately precedes the potential reinforcer increases in similar future situations. If a child receives a cookie when he or she asks for one, and the frequency of 'cookie-requesting behavior' increases, the cookie can be seen as reinforcing 'cookie-requesting behavior'. If, however, cookie-requesting behavior does not increase, the cookie cannot be considered reinforcing. The sole criterion which can determine if an item, activity or food is reinforcing is the change in the probability of a behavior after the administration of a potential reinforcer. Other theories may focus on additional factors such as whether the person expected the strategy to work at some point, but a behavioral theory of reinforcement would focus specifically upon the probability of the behavior.
The study of reinforcement has produced an enormous body of reproducible experimental results. Reinforcement is the central concept and procedure in the experimental analysis of behavior and much of quantitative analysis of behavior.
- Positive reinforcement is an increase in the future frequency of a behavior due to the addition of a stimulus immediately following a response. Giving (or adding) food to a dog contingent on its sitting is an example of positive reinforcement (if this results in an increase in the future behavior of the dog sitting).
- Negative reinforcement is an increase in the future frequency of a behavior when the consequence is the removal of an aversive stimulus. Turning off (or removing) an annoying song when a child asks their parent is an example of negative reinforcement (if this results in an increase in asking behavior of the child in the future).
- Avoidance conditioning is a form of negative reinforcement that occurs when a behavior prevents an aversive stimulus from starting or being applied.
Skinner argued that, while it may appear so, punishment is not simply the opposite of reinforcement; rather, it has other effects in addition to decreasing undesired behavior.
Distinguishing "positive" from "negative" can be difficult, and the necessity of the distinction is often debated. For example, in a very warm room, a current of external air serves as positive reinforcement because it is pleasantly cool or negative reinforcement because it removes uncomfortably hot air. Some reinforcement can be simultaneously positive and negative, such as a drug addict taking drugs for the added euphoria and eliminating withdrawal symptoms. Many behavioral psychologists simply refer to reinforcement or punishment—without polarity—to cover all consequent environmental changes.
## Primary reinforcers
A primary reinforcer, sometimes called an unconditioned reinforcer, is a stimulus that does not require pairing to function as a reinforcer and most likely has obtained this function through evolution and its role in species' survival. Examples of primary reinforcers include sleep, food, air, water, and sex. Other primary reinforcers, such as certain drugs, may mimic the effects of other primary reinforcers. While these primary reinforcers are fairly stable through life and across individuals, the reinforcing value of different primary reinforcers varies due to multiple factors (e.g., genetics, experience). Thus, one person may prefer one type of food while another abhors it. Or one person may eat lots of food while another eats very little. So even though food is a primary reinforcer for both individuals, the value of food as a reinforcer differs between them. Often primary reinforcers shift their reinforcing value temporarily through satiation and deprivation. Food, for example, may cease to be effective as a reinforcer after a certain amount of it has been consumed (satiation). After a period during which it does not receive any of the primary reinforcer (deprivation), however, the primary reinforcer may once again regain its effectiveness in increasing response strength.
## Secondary reinforcers
A secondary reinforcer, sometimes called a conditioned reinforcer, is a stimulus or situation that has acquired its function as a reinforcer after pairing with a stimulus which functions as a reinforcer. This stimulus may be a primary reinforcer or another conditioned reinforcer (such as money). An example of a secondary reinforcer would be the sound from a clicker, as used in clicker training. The sound of the clicker has been associated with praise or treats, and subsequently, the sound of the clicker may function as a reinforcer. As with primary reinforcers, an organism can experience satiation and deprivation with secondary reinforcers.
## Other reinforcement terms
- A generalized reinforcer is a conditioned reinforcer that has obtained the reinforcing function by pairing with many other reinforcers (such as money, a secondary generalized reinforcer).
- In reinforcer sampling, a potentially reinforcing but unfamiliar stimulus is presented to an organism without regard to any prior behavior. The stimulus may then later be used more effectively in reinforcement.
- Socially mediated reinforcement (direct reinforcement) involves the delivery of reinforcement which requires the behavior of another organism.
- Premack principle is a special case of reinforcement elaborated by David Premack, which states that a highly preferred activity can be used effectively as a reinforcer for a less preferred activity.
- Reinforcement hierarchy is a list of actions, rank-ordering the most desirable to least desirable consequences that may serve as a reinforcer.
A reinforcement hierarchy can be used to determine the relative frequency and desirability of different activities, and is often employed when applying the Premack principle.
- Contingent outcomes are more likely to reinforce behavior than non-contingent responses. Contingent outcomes are those directly linked to a causal behavior, such as a light turning on being contingent on flipping a switch. Note that contingent outcomes are not necessary to demonstrate reinforcement, but perceived contingency may increase learning.
- Contiguous stimuli are stimuli closely associated by time and space with specific behaviors. They reduce the amount of time needed to learn a behavior while increasing its resistance to extinction. Giving a dog a piece of food immediately after sitting is more contiguous with (and therefore more likely to reinforce) the behavior than a several minute delay in food delivery following the behavior.
- Noncontingent reinforcement refers to response-independent delivery of stimuli identified as reinforcers for some behaviors of that organism. However, this typically entails time-based delivery of stimuli identified as maintaining aberrant behavior, which serves to decrease the rate of the target behavior. As no measured behavior is identified as being strengthened, there is controversy surrounding the use of the term noncontingent "reinforcement".
# Natural and artificial reinforcement
In his 1967 paper, Arbitrary and Natural Reinforcement, Charles Ferster proposed that reinforcement can be classified into events which increase the frequency of an operant as a natural consequence of the behavior itself, and those which are presumed to affect frequency by their requirement of human mediation, such as in a token economy where subjects are "rewarded" for certain behavior with an arbitrary token of a negotiable value.
In 1970, Baer and Wolf created a name for the use of natural reinforcers called behavior traps. A behavior trap is one in which only a simple response is necessary to enter the trap, yet once entered, the trap cannot be resisted in creating general behavior change. It is the use of a behavioral trap that will increase one's repertoire by exposing a person to the naturally occurring reinforcement of that behavior. Behavior traps have four characteristics:
- They are "baited" with virtually irresistible reinforcers that "lure" the student to the trap
- Only a low-effort response already in the repertoire is necessary to enter the trap
- Interrelated contingencies of reinforcement inside the trap motivate the person to acquire, extend, and maintain targeted academic/social skills
- They can remain effective for a long time because the person shows few, if any, satiation effects.
As can be seen from the above, artificial reinforcement is used to build or develop skills; for the skill to generalize, it is important that a behavior trap be introduced to 'capture' the skill and make use of naturally occurring reinforcement to maintain or increase it. This behavior trap may simply be a social situation that will generally result from a specific behavior once it has met a certain criterion (e.g., if you use edible reinforcers to train a person to say hello and smile at people when they meet them, then after that skill has been built up, the natural reinforcer of other people smiling and having more friendly interactions will maintain the skill, and the edibles can be faded).
# Schedules of reinforcement
When an animal's surroundings are controlled, its behavior patterns after reinforcement become predictable, even for very complex behavior patterns. A schedule of reinforcement is the protocol for determining when responses or behaviors will be reinforced, ranging from continuous reinforcement, in which every response is reinforced, to extinction, in which no response is reinforced. Between these extremes is intermittent or partial reinforcement, where only some responses are reinforced.
Specific variations of intermittent reinforcement reliably induce specific patterns of response, irrespective of the species being investigated (including humans in some conditions). The orderliness and predictability of behavior under schedules of reinforcement were evidence for B. F. Skinner's claim that using operant conditioning he could obtain "control over behaviour", in a way that rendered the theoretical disputes of contemporary comparative psychology obsolete. The reliability of schedule control supported the idea that a radical behaviorist experimental analysis of behavior could be the foundation for a psychology that did not refer to mental or cognitive processes. The reliability of schedules also led to the development of Applied Behavior Analysis as a means of controlling or altering behavior. Many of the simpler possibilities, and some of the more complex ones, were investigated at great length by Skinner using pigeons, but new schedules continue to be defined and investigated.
## Simple schedules
Simple schedules have a single rule to determine when a single type of reinforcer is delivered for a specific response.
- Fixed ratio (FR) schedules deliver reinforcement after every nth response.
- Example: FR2 = every second response is reinforced
- Lab example: FR5 = rat reinforced with food after every 5 bar-presses in a Skinner box
- Real-world example: FR10 = used car dealer gets a $1000 bonus for every 10 cars sold on the lot
- Continuous reinforcement (CRF) schedules are a special form of fixed ratio; in a continuous reinforcement schedule, reinforcement follows each and every response.
- Lab example: each time a rat presses a bar it gets a pellet of food
- Real world example: each time a dog defecates outside its owner gives it a treat
- Fixed interval (FI) schedules deliver reinforcement for the first response after a fixed length of time since the last reinforcement, while premature responses are not reinforced.
- Example: FI1" = reinforcement provided for the first response after 1 second
- Lab example: FI15" = rat is reinforced for the first bar press after 15 seconds passes since the last reinforcement
- Real world example: FI24 hour = calling a radio station is reinforced with a chance to win a prize, but the person can only sign up once per day
- Variable ratio (VR) schedules deliver reinforcement after a random number of responses (based upon a predetermined average).
- Example: VR3 = on average, every third response is reinforced
- Lab example: VR10 = on average, a rat is reinforced for each 10 bar presses
- Real world example: VR37 = a roulette player betting on specific numbers will win on average once every 37 tries (on a U.S. roulette wheel, this would be VR38)
- Variable interval (VI) schedules deliver reinforcement for the first response after a random average length of time passes since the last reinforcement.
- Example: VI3" = reinforcement is provided for the first response after an average of 3 seconds since the last reinforcement
- Lab example: VI10" = a rat is reinforced for the first bar press after an average of 10 seconds passes since the last reinforcement
- Real world example: a predator can expect to come across prey on a variable interval schedule
Other simple schedules include:
- Differential reinforcement of incompatible behavior (DRI) is used to reduce a frequent behavior without punishing it by reinforcing an incompatible response. An example would be reinforcing clapping to reduce nose picking.
- Differential reinforcement of other behavior (DRO) is used to reduce a frequent behavior by reinforcing any behavior other than the undesired one. An example would be reinforcing any hand action other than nose picking.
- Differential reinforcement of low response rate (DRL) is used to encourage low rates of responding. It is like an interval schedule, except that premature responses reset the time required between responses.
- Lab example: DRL10" = a rat is reinforced for the first response after 10 seconds, but if the rat responds earlier than 10 seconds there is no reinforcement and the rat has to wait 10 seconds from that premature response without another response before bar pressing will lead to reinforcement.
- Real world example: "If you ask me for a potato chip no more than once every 10 minutes, I will give it to you. If you ask more often, I will give you none."
- Differential reinforcement of high rate (DRH) is used to increase high rates of responding. It is like an interval schedule, except that a minimum number of responses are required in the interval in order to receive reinforcement.
- Lab example: DRH10"/15 responses = a rat must press a bar 15 times within a 10 second increment in order to be reinforced
- Real world example: "If Lance Armstrong is going to win the Tour de France he has to pedal x number of times during the y hour race."
- Fixed Time (FT) provides reinforcement at a fixed time since the last reinforcement, irrespective of whether the subject has responded or not. In other words, it is a non-contingent schedule.
- Lab example: FT5" = rat gets food every 5" regardless of the behavior.
- Real world example: a person gets an annuity check every month regardless of behavior between checks
- Variable Time (VT) provides reinforcement at an average variable time since the last reinforcement, regardless of whether the subject has responded or not.
A simple simulation contrasting several of these schedules is sketched at the end of this section.
### Effects of different types of simple schedules
- Ratio schedules produce higher rates of responding than interval schedules, when the rates of reinforcement are otherwise similar.
- Variable schedules produce higher rates and greater resistance to extinction than most fixed schedules. This is also known as the Partial Reinforcement Extinction Effect (PREE).
- The variable ratio schedule produces both the highest rate of responding and the greatest resistance to extinction (an example would be the behavior of gamblers at slot machines).
- Fixed schedules produce 'post-reinforcement pauses' (PRP), where responses will briefly cease immediately following reinforcement, though the pause is a function of the upcoming response requirement rather than the prior reinforcement. The PRP of a fixed interval schedule is frequently followed by an accelerating rate of response which is "scallop shaped," while those of fixed ratio schedules are more angular.
- Organisms whose schedules of reinforcement are 'thinned' (that is, requiring more responses or a greater wait before reinforcement) may experience 'ratio strain' if thinned too quickly. This produces behavior similar to that seen during extinction.
- Partial reinforcement schedules are more resistant to extinction than continuous reinforcement schedules. Ratio schedules are more resistant than interval schedules and variable schedules more resistant than fixed ones.
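The following is a minimal sketch of the bookkeeping behind three of the simple schedules above (FR, FI, and VR). The response times and the uniform distribution used for the variable ratio are illustrative assumptions, not a standard from the behavior-analytic literature.

```python
# Minimal sketch of fixed-ratio, fixed-interval and variable-ratio bookkeeping.
import random

def fixed_ratio(n, num_responses):
    """FRn: reinforce every nth response."""
    return [(i + 1) % n == 0 for i in range(num_responses)]

def fixed_interval(interval_s, response_times):
    """FI: reinforce the first response occurring at least `interval_s`
    seconds after the last reinforcement; premature responses earn nothing."""
    reinforced, last_reinforcement = [], 0.0
    for t in response_times:
        if t - last_reinforcement >= interval_s:
            reinforced.append(True)
            last_reinforcement = t
        else:
            reinforced.append(False)
    return reinforced

def variable_ratio(mean_n, num_responses, seed=0):
    """VR: reinforce after a random number of responses averaging mean_n."""
    rng = random.Random(seed)
    reinforced, count = [], 0
    target = rng.randint(1, 2 * mean_n - 1)
    for _ in range(num_responses):
        count += 1
        if count >= target:
            reinforced.append(True)
            count, target = 0, rng.randint(1, 2 * mean_n - 1)
        else:
            reinforced.append(False)
    return reinforced

print(fixed_ratio(5, 12))                      # FR5: every 5th response pays off
print(fixed_interval(15, [3, 9, 16, 20, 33]))  # FI15": only the responses at 16 s and 33 s
print(variable_ratio(10, 30))                  # VR10: unpredictable payoffs averaging 1 in 10
```

Running it shows the characteristic difference: FR pays off on a strict count, FI rewards only the first response after the interval elapses, and VR pays off unpredictably around its mean.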
## Compound schedules
Compound schedules combine two or more different simple schedules in some way, using the same reinforcer for the same behaviour. There are many possibilities; among those most often used are:

- Alternative schedules - a type of compound schedule where two or more simple schedules are in effect and whichever simple schedule is completed first results in reinforcement.
- Conjunctive schedules - a complex schedule of reinforcement where two or more simple schedules are in effect independently of each other, and the requirements of all of the simple schedules must be met for reinforcement.
- Multiple schedules - either of two or more schedules may occur, with a stimulus indicating which is in force.
  - Example: FR4 when given a whistle and FI6 when given a bell ring.
- Mixed schedules - either of two or more schedules may occur, with no stimulus indicating which is in force.
  - Example: FI6 and then VR3, without any stimulus warning of the change in schedule.
- Concurrent schedules - two schedules are simultaneously in force, though not necessarily on two different response devices, and reinforcement on those schedules is independent of each other.
- Interlocking schedules - a single schedule with two components, where progress in one component affects progress in the other. In an interlocking FR60-FI120, for example, each response subtracts time from the interval component, such that each response is "equal" to removing two seconds from the FI.
- Chained schedules - reinforcement occurs after two or more successive schedules have been completed, with a stimulus indicating when one schedule has been completed and the next has started.
  - Example: FR10 in a green light; when completed, a yellow light indicates FR3; after that is completed, a red light indicates VI6, and so on. At the end of the chain, a reinforcer is given.
- Tandem schedules - reinforcement occurs when two or more successive schedule requirements have been completed, with no stimulus indicating when a schedule has been completed and the next has started.
  - Example: VR10; after it is completed the schedule changes without warning to FR10; after that it changes without warning to FR16, and so on. At the end of the series of schedules, a reinforcer is finally given.
- Higher order schedules - completion of one schedule is reinforced according to a second schedule; e.g. in FR2 (FI 10 secs), two successive fixed interval schedules would have to be completed before a response is reinforced.
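As a rough sketch of how compound rules combine simple ones (again not from the original article, and with invented helper names), an alternative schedule reinforces a response as soon as any component requirement is met, while a conjunctive schedule waits until all of them have been met:

```python
def fixed_ratio(n):
    """FR n: reinforce every nth response (same rule as in the earlier sketch)."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True
        return False
    return respond

def alternative(*components):
    """Alternative schedule: a response is reinforced as soon as ANY component
    schedule's requirement is satisfied by it."""
    def respond():
        # list comprehension (not a generator) so every component sees the response
        return any([c() for c in components])
    return respond

def conjunctive(*components):
    """Conjunctive schedule: reinforcement requires that ALL component requirements
    have been satisfied. (For simplicity, component counters are not reset here.)"""
    met = [False] * len(components)
    def respond():
        for i, c in enumerate(components):
            met[i] = met[i] or c()
        if all(met):
            for i in range(len(met)):
                met[i] = False
            return True
        return False
    return respond

# Alternative FR3-or-FR5: reinforced on responses 3, 5 and 6, whichever rule fires.
alt = alternative(fixed_ratio(3), fixed_ratio(5))
print([alt() for _ in range(6)])

# Conjunctive FR3-and-FR5: the first reinforcer arrives on the 5th response,
# once both the FR3 and the FR5 requirements have been met.
conj = conjunctive(fixed_ratio(3), fixed_ratio(5))
print([conj() for _ in range(5)])
```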
## Superimposed schedules
Superimposed schedules of reinforcement is a term in psychology which refers to a structure of rewards where two or more simple schedules of reinforcement operate simultaneously. The reinforcers can be positive and/or negative. An example would be a person who comes home after a long day at work: the behavior of opening the front door is rewarded both by a big kiss on the lips from the person's spouse and by a rip in the pants from the family dog jumping up enthusiastically. Another example of superimposed schedules of reinforcement would be a pigeon in an experimental cage pecking at a button: the pecks result in a hopper of grain being delivered every twentieth peck and in access to water becoming available after every two hundred pecks.

Superimposed schedules of reinforcement are a type of compound schedule that evolved from the initial work on simple schedules of reinforcement by B. F. Skinner and his colleagues (Skinner and Ferster, 1957). They demonstrated that reinforcers could be delivered on schedules, and further that organisms behaved differently under different schedules. Rather than a reinforcer, such as food or water, being delivered every time as a consequence of some behavior, a reinforcer could be delivered after more than one instance of the behavior. For example, a pigeon may be required to peck a button switch ten times before food is made available; this is called a "ratio schedule." Also, a reinforcer could be delivered after an interval of time has passed following a target behavior. An example is a rat that is given a food pellet two minutes after it pressed a lever; this is called an "interval schedule." In addition, ratio schedules can deliver reinforcement following a fixed or variable number of behaviors by the individual organism, and interval schedules can deliver reinforcement following fixed or variable intervals of time after a single response by the organism. Individual behaviors tend to generate response rates that differ based upon how the reinforcement schedule is created. Much subsequent research in many labs examined the effects of scheduling reinforcers on behavior.

If an organism is offered the opportunity to choose between or among two or more simple schedules of reinforcement at the same time, the reinforcement structure is called a "concurrent schedule of reinforcement." Brechner (1974, 1977) introduced the concept of "superimposed schedules of reinforcement" in an attempt to create a laboratory analogy of social traps, such as when humans overharvest their fisheries or tear down their rainforests. Brechner created a situation where simple reinforcement schedules were superimposed upon each other; in other words, a single response or group of responses by an organism led to multiple consequences. Concurrent schedules of reinforcement can be thought of as "or" schedules, and superimposed schedules of reinforcement can be thought of as "and" schedules. Brechner and Linder (1981) and Brechner (1987) expanded the concept to describe how superimposed schedules and the social trap analogy could be used to analyze the way energy flows through systems.

Superimposed schedules of reinforcement have many real-world applications in addition to generating social traps. Many different human individual and social situations can be created by superimposing simple reinforcement schedules. For example, a human being could have simultaneous tobacco and alcohol addictions. Even more complex situations can be created or simulated by superimposing two or more concurrent schedules. For example, a high school senior could have a choice between going to Stanford University or UCLA, and at the same time have the choice of going into the Army or the Air Force, and simultaneously the choice of taking a job with an internet company or a job with a software company.
That would be a reinforcement structure of three superimposed concurrent schedules of reinforcement. Superimposed schedules of reinforcement can be used to create the three classic conflict situations (approach-approach conflict, approach-avoidance conflict, and avoidance-avoidance conflict) described by Kurt Lewin (1935), and can be used to operationalize other Lewinian situations analyzed by his force field analysis. Another example of the use of superimposed schedules of reinforcement as an analytical tool is its application to the contingencies of rent control (Brechner, 2003).

## Concurrent schedules
In operant conditioning, concurrent schedules of reinforcement are schedules of reinforcement that are simultaneously available to an animal subject or human participant, so that the subject or participant can respond on either schedule. For example, a pigeon in a Skinner box might be faced with two pecking keys; pecking responses can be made on either, and food reinforcement might follow a peck on either. The schedules of reinforcement arranged for pecks on the two keys can be different. They may be independent, or they may have some links between them so that behaviour on one key affects the likelihood of reinforcement on the other.

It is not necessary for the responses on the two schedules to be physically distinct: in an alternative way of arranging concurrent schedules, introduced by Findley in 1958, both schedules are arranged on a single key or other response device, and the subject or participant can respond on a second key in order to change over between the schedules. In such a "Findley concurrent" procedure, a stimulus (e.g. the colour of the main key) is used to signal which schedule is currently in effect.

Concurrent schedules often induce rapid alternation between the keys. To prevent this, a "changeover delay" is commonly introduced: each schedule is inactivated for a brief period after the subject switches to it. When both the concurrent schedules are variable intervals, a quantitative relationship known as the matching law is found between relative response rates in the two schedules and the relative reinforcement rates they deliver; this was first observed by R. J. Herrnstein in 1961.
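As a small illustration (not in the original article), the strict form of the matching law can be written as a one-line prediction; the reinforcement rates below are hypothetical.

```python
def matching_prediction(r1, r2):
    """Strict matching: the share of responses on alternative 1 equals the share
    of reinforcement earned there, i.e. B1 / (B1 + B2) = r1 / (r1 + r2)."""
    return r1 / (r1 + r2)

# Hypothetical reinforcement rates (reinforcers per hour) on two concurrent VI keys.
share_on_key_1 = matching_prediction(40, 20)
print(f"Predicted share of responses on key 1: {share_on_key_1:.2f}")  # about 0.67
```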
# Shaping
Shaping involves reinforcing successive, increasingly accurate approximations of a response desired by a trainer. In training a rat to press a lever, for example, simply turning toward the lever will be reinforced at first. Then, only turning and stepping toward it will be reinforced. As training progresses, the response reinforced becomes progressively more like the desired behavior.

# Chaining
Chaining involves linking discrete behaviors together in a series, such that the result of each behaviour is both the reinforcement (or consequence) for the previous behavior and the stimulus (or antecedent) for the next behavior. There are many ways to teach chaining, such as forward chaining (starting from the first behavior in the chain), backwards chaining (starting from the last behavior) and total task chaining (in which the entire behavior is taught from beginning to end, rather than as a series of steps). An example would be opening a locked door: first the key is inserted, then turned, then the door opened.

Forward chaining would teach the subject first to insert the key. Once that task is mastered, they are told to insert the key and are taught to turn it. Once that task is mastered, they are told to perform the first two steps and are then taught to open the door. Backwards chaining would involve the teacher first inserting and turning the key, with the subject taught to open the door. Once that is learned, the teacher inserts the key and the subject is taught to turn it, then open the door as the next step. Finally, the subject is taught to insert the key, and then turn it and open the door. Once the first step is mastered, the entire task has been taught. Total task chaining would involve teaching the entire task as a single series, prompting through all steps. Prompts are faded (reduced) at each step as they are mastered.

# Criticisms
The standard definition of behavioral reinforcement has been criticized as circular, since it appears to argue that response strength is increased by reinforcement while defining reinforcement as something which increases response strength; that is, the standard definition says only that response strength is increased by things which increase response strength. However, the correct usage of reinforcer or reinforcement is that something is a reinforcer because of its effect on behavior, and not the other way around. The usage becomes circular only if one says that a particular stimulus strengthens behavior because it is a reinforcer; being a reinforcer should not be offered as an explanation of why a stimulus produces that effect on behavior. Other definitions have been proposed, such as F. D. Sheffield's "consummatory behavior contingent on a response," but these are not broadly used in psychology.

# History of the terms
In the 1920s Russian physiologist Ivan Pavlov may have been the first to use the word reinforcement with respect to behavior, but (according to Dinsmoor) he used its approximate Russian cognate sparingly, and even then it referred to strengthening an already-learned but weakening response. He did not use it, as it is today, for selecting and strengthening new behavior. Pavlov's introduction of the word extinction (in Russian) approximates today's psychological use.

In popular use, positive reinforcement is often used as a synonym for reward, with people (not behavior) thus being "reinforced," but this is contrary to the term's consistent technical usage, as it is a dimension of behavior, and not the person, which is strengthened. Negative reinforcement is often used by laypeople and even social scientists outside psychology as a synonym for punishment. This is contrary to modern technical use, but it was B. F. Skinner who first used it this way in his 1938 book. By 1953, however, he followed others in thus employing the word punishment, and he re-cast negative reinforcement for the removal of aversive stimuli.

There are some within the field of behavior analysis who have suggested that the terms "positive" and "negative" constitute an unnecessary distinction in discussing reinforcement, as it is often unclear whether stimuli are being removed or presented. For example, Iwata poses the question: “…is a change in temperature more accurately characterized by the presentation of cold (heat) or the removal of heat (cold)?” (p. 363). Thus, it may be best to conceptualize reinforcement simply as a pre-change condition being replaced by a post-change condition which reinforces the behavior that was followed by the change in stimulus conditions.
https://www.wikidoc.org/index.php/Reinforcement
b72b394c67ff21d0ab98d53eb25e29cb45aeb50f
wikidoc
Relative risk
Relative risk
In statistics and mathematical epidemiology, relative risk (RR) is the risk of an event (or of developing a disease) relative to exposure. Relative risk is a ratio of the probability of the event occurring in the exposed group versus the control (non-exposed) group. For example, if the probability of developing lung cancer among smokers was 20% and among non-smokers 1%, then the relative risk of cancer associated with smoking would be 20. Smokers would be twenty times as likely as non-smokers to develop lung cancer.

# Statistical use and meaning
Relative risk is used frequently in the statistical analysis of binary outcomes where the outcome of interest has relatively low probability. It is thus often suited to clinical trial data, where it is used to compare the risk of developing a disease in people not receiving the new medical treatment (or receiving a placebo) versus people who are receiving an established (standard of care) treatment. Alternatively, it is used to compare the risk of developing a side effect in people receiving a drug as compared to people who are not receiving the treatment (or receiving a placebo). It is particularly attractive because it can be calculated by hand in the simple case, but is also amenable to regression modelling, typically in a Poisson regression framework.

In a simple comparison between an experimental group and a control group:
- A relative risk of 1 means there is no difference in risk between the two groups.
- A RR of < 1 means the event is less likely to occur in the experimental group than in the control group.
- A RR of > 1 means the event is more likely to occur in the experimental group than in the control group.

As a consequence of the Delta method, the log of the relative risk has a sampling distribution that is approximately normal, with a variance that can be estimated by a formula involving the number of subjects in each group and the event rates in each group (see Delta method). This permits the construction of a confidence interval (CI) which is symmetric around \log(RR), i.e.

\log(RR) \pm z_\alpha \times SE

where z_\alpha is the standard score for the chosen level of significance and SE is the standard error of \log(RR). The antilog can be taken of the two bounds of the log-CI, giving the high and low bounds for an asymmetric confidence interval around the relative risk.

In regression models, the treatment is typically included as a dummy variable along with other factors that may affect risk. The relative risk is normally reported as calculated for the mean of the sample values of the explanatory variables.
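As an illustrative sketch (not part of the original article), the point estimate and the log-scale confidence interval described above can be computed from a two-by-two table of counts; the counts are hypothetical and the standard error used is the usual Delta-method approximation for log(RR).

```python
import math

def relative_risk_ci(events_exposed, n_exposed, events_control, n_control, z=1.96):
    """Relative risk with an asymmetric CI obtained by exponentiating
    log(RR) +/- z * SE, where SE is the Delta-method standard error of log(RR)."""
    p1 = events_exposed / n_exposed
    p2 = events_control / n_control
    rr = p1 / p2
    se_log_rr = math.sqrt(1 / events_exposed - 1 / n_exposed
                          + 1 / events_control - 1 / n_control)
    low = math.exp(math.log(rr) - z * se_log_rr)
    high = math.exp(math.log(rr) + z * se_log_rr)
    return rr, low, high

def odds_ratio(events_exposed, n_exposed, events_control, n_control):
    """Odds ratio for the same table, for comparison with the relative risk."""
    odds_exposed = events_exposed / (n_exposed - events_exposed)
    odds_control = events_control / (n_control - events_control)
    return odds_exposed / odds_control

# Hypothetical counts: 20 of 100 exposed and 1 of 100 unexposed develop the disease,
# mirroring the 20% versus 1% lung-cancer illustration in the introduction.
rr, low, high = relative_risk_ci(20, 100, 1, 100)
print(f"RR = {rr:.1f}, 95% CI {low:.1f} to {high:.1f}")
print(f"OR = {odds_ratio(20, 100, 1, 100):.1f}")  # diverges from RR as risks grow
```

Because the interval is built on the log scale and then exponentiated, it is asymmetric around the point estimate, as noted above.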
## Association with odds ratio
Relative risk is different from the odds ratio, although it asymptotically approaches it for small probabilities. In fact, the odds ratio has much wider use in statistics, since logistic regression, often associated with clinical trials, works with the log of the odds ratio, not relative risk. Because the log of the odds ratio is estimated as a linear function of the explanatory variables, the estimated odds ratio for 70-year-olds and 60-year-olds associated with type of treatment would be the same in a logistic regression model where the outcome is associated with drug and age, although the relative risk might be significantly different. In cases like this, statistical models of the odds ratio often reflect the underlying mechanisms more effectively.

Since relative risk is a more intuitive measure of effectiveness, the distinction is important especially in cases of medium to high probabilities. If action A carries a risk of 99.9% and action B a risk of 99.0%, then the relative risk is just over 1, while the odds associated with action A are roughly ten times as high as the odds with B. In medical research, the odds ratio is favoured for case-control studies and retrospective studies; relative risk is used in randomized controlled trials and cohort studies.

In statistical modelling, approaches like Poisson regression (for counts of events per unit exposure) have relative risk interpretations: the estimated effect of an explanatory variable is multiplicative on the rate, and thus leads to a risk ratio or relative risk. Logistic regression (for binary outcomes, or counts of successes out of a number of trials) must be interpreted in odds-ratio terms: the effect of an explanatory variable is multiplicative on the odds, and thus leads to an odds ratio.

# Size of relative risk and relevance
In the standard or classical hypothesis testing framework, the null hypothesis is that RR = 1 (the putative risk factor has no effect). The null hypothesis can be rejected in favor of the alternative hypothesis that the factor in question does affect risk if the confidence interval for RR excludes 1. Critics of the standard approach, notably including John Brignell and Steven Milloy, believe published studies suffer from unduly high type I error rates, and have argued for an additional requirement that the point estimate of RR should exceed 2 (or, if risks are reduced, be below 0.5); they have cited a variety of statements by statisticians and others supporting this view. The issue has arisen particularly in relation to debates about the effects of passive smoking, where the effect size appears to be small (relative to smoking) and exposure levels are difficult to quantify in the affected population.

In support of this claim, it may be observed that, if the base level of risk is low, a small proportionate increase in risk may be of little practical significance (in the case of lung cancer, however, the base risk is substantial). In addition, if estimates are biased by the exclusion of relevant factors, the likelihood of a spurious finding of significance is greater if the estimated RR is close to 1. In his paper "Why Most Published Research Findings Are False," John Ioannidis writes, "The smaller the effect sizes in a scientific field, the less likely the research findings are to be true," adding that "research findings are more likely true in scientific fields with relative risks 3–20, than in scientific fields where postulated effects are small (relative risks 1.1–1.5)" and that "if the majority of true genetic or nutritional determinants of complex diseases confer relative risks less than 1.05, genetic or nutritional epidemiology would be largely utopian endeavors."

In assessing results claiming an increase of relative risk arising from exposure to a hazard, statisticians and epidemiologists consider a range of factors, including the size of the effect, the level of statistical significance, whether the results arise from a clinical trial or observation of a population, the significance of possible confounding factors, the extent to which results have been replicated, and the presence or absence of a biomedical model for the claimed effect. Important confounding factors for observational studies of health risks include tobacco smoking and social class.
While few statisticians accept the general claim that a relative risk level greater than 2 is required before a finding of increased risk can be accepted, most agree with this view in relation to findings from single studies without biomedical support; Marcia Angell of the New England Journal of Medicine has expressed a similar view.

The arguments of Milloy, Brignell and others, put forward in relation to passive smoking, have been criticised by epidemiologists. Their approach to epidemiology, involving efforts to discredit individual studies rather than addressing the evidence as a whole, was described in the American Journal of Public Health: A major component of the industry attack was the mounting of a campaign to establish a "bar" for "sound science" that could not be fully met by most individual investigations, leaving studies that did not meet the criteria to be dismissed as "junk science." The campaign also included attempts to characterize relative risks of 2 or less as highly questionable and not amenable to investigation by epidemiologic methods. These efforts were largely abandoned by the tobacco industry when it became clear that no independent epidemiological organization would agree to the standards proposed by Philip Morris et al.

## Statistical significance (confidence) and relative risk
Whether a given relative risk can be considered statistically significant depends on the relative difference between the conditions compared, the amount of measurement (sample size) and the noise associated with the measurement of the events considered. In other words, the confidence one has in a given relative risk being non-random (i.e. not a consequence of chance) depends on the signal-to-noise ratio and the sample size. Expressed mathematically, the confidence that a result is not due to random chance is given by the following formula by Sackett:

confidence = \frac{signal}{noise} \times \sqrt{sample\ size}

For clarity, the relationship captured by the formula is summarized in tabular form below.

Dependence of confidence on signal, noise and sample size:

| Factor | Change | Effect on confidence |
|--------|--------|----------------------|
| Signal (effect size) | Increases | Confidence increases |
| Noise | Increases | Confidence decreases |
| Sample size | Increases | Confidence increases |

In words, the confidence is higher if the noise is lower and/or the sample size is larger and/or the effect size (signal) is increased. The confidence in a relative risk value (and its associated confidence interval) is therefore not dependent on effect size alone: if the sample size is large and the noise is low, a small effect size can be measured with great confidence.

Whether a small effect size is considered important is dependent on the context of the events compared. In medicine, small effect sizes (reflected by small relative risk values) are usually considered clinically relevant (if there is great confidence in them) and are frequently used to guide treatment decisions. A relative risk of 1.10 may seem very small, but over a large number of patients it will make a noticeable difference. Whether a given treatment is considered a worthy endeavour is dependent on the risks, benefits and costs.
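The following is a small numerical sketch (not from the original article) of the Sackett-style heuristic above, showing how the same effect size gains confidence as noise falls or the sample grows; the numbers are invented.

```python
import math

def sackett_confidence(signal, noise, sample_size):
    """Heuristic from the section above: confidence grows with the
    signal-to-noise ratio and with the square root of the sample size."""
    return (signal / noise) * math.sqrt(sample_size)

# The same small effect (signal = 1.0) becomes more convincing with more data or less noise.
for noise, n in [(2.0, 100), (2.0, 10000), (0.5, 100)]:
    print(f"noise={noise}, n={n}: confidence ~ {sackett_confidence(1.0, noise, n):.0f}")
```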
https://www.wikidoc.org/index.php/Relative_risk