category: stringclasses (191 values)
search_query: stringclasses (434 values)
search_type: stringclasses (2 values)
search_engine_input: stringclasses (748 values)
url: stringlengths (22-468)
title: stringlengths (1-77)
text_raw: stringlengths (1.17k-459k)
text_window: stringlengths (545-2.63k)
stance: stringclasses (2 values)
Veterinary Science
Are vegan diets healthy for dogs?
no_statement
"vegan" "diets" are not "healthy" for "dogs".. "dogs" should not be fed a "vegan" "diet" for optimal health.
https://www.winchester.ac.uk/news-and-events/press-centre/media-articles/vegan-diets-may-be-the-healthiest-to-feed-pet-dogs-say-researchers.php
Media Articles - Vegan diets may be the healthiest to feed pet dogs ...
Vegan diets may be the healthiest to feed pet dogs, say researchers Nutritionally-sound vegan diets are the healthiest and least hazardous choices for owners to feed their pet dogs, according to the authors of a new research study. The study of 2,639 dogs and their owners, led by the University of Winchester, is among the first large-scale studies to explore how health outcomes vary between dogs fed meat-based or vegan diets. For the research, dog owners provided information about one dog which was fed either a conventional meat, raw meat or vegan diet for at least one year. The researchers looked at seven general indicators of ill health in dogs - including unusually high numbers of visits to the vet; whether the dog took medication; and the percentage of unwell dogs - and 22 of the most common canine health disorders. Dog owners were asked to report their own opinion of their dog's health and also what they believed their vet's assessment to be. The findings show that, considering all seven general indicators of health, dogs fed conventional meat diets appeared to be less healthy than those fed either a raw meat or a vegan diet. They had poorer health indicators in almost all cases. Dogs fed raw meat diets appeared to fare marginally better than those fed vegan diets. However, the effect sizes were statistically small, in every case. Additionally, the dogs fed raw meat were significantly younger on average, which has been shown to have protective effects, improving health outcomes. Additionally, factors unrelated to health may have improved apparent outcomes for dogs fed raw meat, with the proportion of dogs who had not seen a vet in the last year markedly higher in this group. If ages were equalised and non-health related barriers to visiting the vet were accounted for, the researchers say it is not possible to conclude that dogs fed raw meat diets would be likely to have health outcomes superior to those fed vegan diets. 
The researchers also looked at the prevalence of 22 specific health disorders, based on predictions by vet assessments. Health disorders included problems with their skin/coat, dental issues, allergic dermatitis and arthritis - the most common disorders experienced by dogs. Percentages of dogs in each dietary group considered to have suffered from health disorders were 49 per cent for conventional meat diets, 43 per cent for raw meat diets and 36 per cent for vegan diets. Additionally, previous research indicates that raw meat diets are often associated with dietary risks, particularly pathogens such as bacteria and parasites, which are more common in such dogs, as well as their guardians - indicating concurrent risks to humans sharing their households with such dogs. Andrew Knight, Professor of Animal Welfare and Ethics and Founding Director of the Centre for Animal Welfare at the University of Winchester, said: "Pooled evidence to date from our study and others in this field indicates that the healthiest and least harmful dietary choice for dogs among conventional, raw meat and vegan diets, is a nutritionally-sound vegan diet. "Vegan diets are among a range of alternative diets being developed to address increasing concerns of consumers about traditional meat-based pet foods, including their environmental 'pawprint', their perceived lack of 'naturalness', health concerns, or impacts on those animals in the food chain used to formulate such diets. "Regardless of ingredients used, diets should always be formulated to be nutritionally complete and balanced, without which adverse health effects may eventually be expected to occur," he added. "Among the dog owners taking part in the study, the health of their pet was one of the most important considerations in choosing a diet," said Dr Hazel Brown, co-author of the study at the University of Winchester. 
"Alternative diets and pet foods offer benefits to both environmental sustainability and the welfare of farmed animals which are processed into pet foods, but many pet owners worry that they may harm the welfare of pets. There is no evidence that biological and practical challenges in formulating nutritionally adequate canine vegan diets mean their use should not be recommended." The study Vegan versus meat-based dog food: guardian-reported indicators of health is published in PLOS ONE and is available online at this link.
yes
Veterinary Science
Are vegan diets healthy for dogs?
no_statement
"vegan" "diets" are not "healthy" for "dogs".. "dogs" should not be fed a "vegan" "diet" for optimal health.
https://www.euronews.com/next/2022/04/13/should-dogs-go-vegan-large-scale-study-finds-plant-based-diet-is-healthiest-and-safest
Should dogs go vegan? Large-scale study suggests plant-based ...
More humans are switching to plant-based diets, so why not man’s best friend? Is it time for dogs to go vegan? There is a growing movement promoting vegan dog diets, with Formula 1 legend Lewis Hamilton the most prominent proponent. Now in what the authors believe is the first large-scale study comparing vegan with meaty dog diets, the results suggest a nutritionally sound vegan diet could bring health benefits and fewer hazards for man’s best friend. The survey study of the guardians of 2,536 dogs looked at links between dog diets and health outcomes. The dogs were fed either a conventional meat diet, a raw meat diet, or a vegan diet. The survey included questions about the dogs’ health, including how many times they visited the vet, what medication they were on, and specific dog health issues. Andrew Knight, professor of animal welfare and ethics at the University of Winchester in the UK and the lead author of the study, said its findings were clear. “We found the diet that was healthiest and least hazardous for the dogs was a nutritionally sound vegan diet,” he told Euronews Next.

Conventional diets ‘least healthy’

The findings suggest that dogs on conventional meat diets were less healthy than dogs on raw meat or vegan diets, with more non-routine visits to the vet, a higher use of medication, a higher proportion being put on therapeutic diets, and a higher proportion being reported by owners to have health problems. The outcomes were slightly better for dogs on raw meat diets over vegan diets, although the authors argue these are not statistically significant and that other factors such as the age of the dogs need to be factored into future studies to get a clearer picture. Knight points out, meanwhile, that there is a growing body of evidence regarding the hazards of a raw meat diet - hazards that are not linked to vegan diets.
“There’s a considerable body of study showing they are associated with bacterial and parasitic pathogens and protozoa [Editors’ note: single-celled organisms], which are more prevalent in the dogs and also the people in the same household, so they’re getting them either from the dog or the food they’re preparing,” he said.

‘Superbug hazard’ in raw dog food

A paper presented to the European Congress of Clinical Microbiology & Infectious Diseases last year warned that "the trend for feeding dogs raw food may be fuelling the spread of antibiotic-resistant bacteria." Researchers from UCIBIO, Faculty of Pharmacy at the University of Porto in Portugal examined various dog food samples from supermarkets and pet shops. The study revealed that Enterococci, a genus of bacteria commonly found in human intestines, was present in more than half of the analysed samples. This type of bacteria is often intrinsically resistant to antibiotics, meaning some species of Enterococci can lead to dangerous disease outbreaks. The paper's authors cautioned that with an estimated 90 million pet dogs in Europe, and nearly 500 million worldwide, dog food may be a dangerously overlooked source of antibiotic resistance globally. The study did suggest slightly better health outcomes for dogs on raw meat diets, but Knight cautioned that settling this required further studies. “What we are clear on is that raw meat diets are associated with significant dietary hazards, particularly the pathogens,” he said.

Vegan dog food: a booming market

According to market research firm The Insight Partners, the vegan pet food market is expected to be worth $15.6 billion (€14.4 billion) by 2028, up from $8.6 billion (€7.9 billion) in 2020. Knight said he was hearing from more and more people wanting to launch vegan pet food start-ups, as the boom in plant-based foods for humans spills over to our pets.
“One of the main concerns people have had is whether their cats and dogs can be healthy on these diets,” he said, adding that interest in the diets would increase further when more results of studies come out. Those concerns are echoed by the British Veterinary Association, which does not currently recommend feeding dogs a vegan diet. “There is a lot of ongoing research and scientific interest in the field of vegan dog diets and this paper adds to the body of evidence supporting its benefits,” said BVA President Justine Shotton. “However, there is currently a lack of robust data mapping the health consequences of feeding a vegan diet to a large number of dogs over many years, so we look forward to seeing further research on whether non-animal protein sources can meet a dog’s dietary requirements over the long term,” she added. She warned that owners who want to feed their pets a vegan diet should “take expert veterinary advice to avoid dietary deficiencies and associated disease”, and that “it is much easier to get the balance of nutrients wrong than to get it right”.
yes
Informatics
Are video games harmful to children's mental health?
yes_statement
"video" "games" have a "harmful" impact on "children"'s "mental" "health".. "children"'s "mental" "health" is negatively affected by "video" "games".
https://www.apa.org/news/press/releases/2010/06/violent-video-games
Violent video games may increase aggression in some but not ...
Violent Video Games May Increase Aggression in Some But Not Others, Says New Research
American Psychological Association. (2010, June 7). Violent video games may increase aggression in some but not others, says new research [Press release]. https://www.apa.org/news/press/releases/2010/06/violent-video-games

WASHINGTON – Playing violent video games can make some adolescents more hostile, particularly those who are less agreeable, less conscientious and easily angered. But for others, it may offer opportunities to learn new skills and improve social networking. In a special issue of the journal Review of General Psychology, published in June by the American Psychological Association, researchers looked at several studies that examined the potential uses of video games as a way to improve visual/spatial skills, as a health aid to help manage diabetes or pain and as a tool to complement psychotherapy. One study examined the negative effects of violent video games on some people. “Much of the attention to video game research has been negative, focusing on potential harm related to addiction, aggression and lowered school performance,” said Christopher J. Ferguson, PhD, of Texas A&M International University and guest editor of the issue. “Recent research has shown that as video games have become more popular, children in the United States and Europe are having fewer behavior problems, are less violent and score better on standardized tests. Violent video games have not created the generation of problem youth so often feared.” In contrast, one study in the special issue shows that video game violence can increase aggression in some individuals, depending on their personalities. In his research, Patrick Markey, PhD, determined that a certain combination of personality traits can help predict which young people will be more adversely affected by violent video games.
“Previous research has shown us that personality traits like psychoticism and aggressiveness intensify the negative effects of violent video games and we wanted to find out why,” said Markey. Markey used the most popular psychological model of personality traits, called the Five-Factor Model, to examine these effects. The model scientifically classifies five personality traits: neuroticism, extraversion, openness to experience, agreeableness and conscientiousness. Analysis of the model showed a “perfect storm” of traits for children who are most likely to become hostile after playing violent video games, according to Markey. Those traits are: high neuroticism (e.g., easily upset, angry, depressed, emotional, etc.), low agreeableness (e.g., little concern for others, indifferent to others feelings, cold, etc.) and low conscientiousness (e.g., break rules, don’t keep promises, act without thinking, etc.). Markey then created his own model, focusing on these three traits, and used it to help predict the effects of violent video games in a sample of 118 teenagers. Each participant played a violent or a non-violent video game and had his or her hostility levels assessed. The teenagers who were highly neurotic, less agreeable and less conscientious tended to be most adversely affected by violent video games, whereas participants who did not possess these personality characteristics were either unaffected or only slightly negatively affected by violent video games. “These results suggest that it is the simultaneous combination of these personality traits which yield a more powerful predictor of violent video games,” said Markey. “Those who are negatively affected have pre-existing dispositions, which make them susceptible to such violent media.” “Violent video games are like peanut butter,” said Ferguson. 
“They are harmless for the vast majority of kids but are harmful to a small minority with pre-existing personality or mental health problems.” The special issue also features articles on the positives of video game play, including as a learning tool. For example: Video games serve a wide range of emotional, social and intellectual needs, according to a survey of 1,254 seventh and eighth graders. The study’s author, Cheryl Olson, PhD, also offers tips to parents on how to minimize potential harm from video games (i.e., supervised play, asking kids why they play certain games, playing video games with their children). Commercial video games have been shown to help engage and treat patients, especially children, in healthcare settings, according to a research review by Pamela Kato, PhD. For example, some specially tailored video games can help patients with pain management, diabetes treatment and prevention of asthma attacks. Video games in mental health care settings may help young patients become more cooperative and enthusiastic about psychotherapy. T. Atilla Ceranoglu, M.D., found in his research review that video games can complement the psychological assessment of youth by evaluating cognitive skills and help clarify conflicts during the therapy process. Contact Dr. Christopher Ferguson by email or by phone at (956) 326-2636. The American Psychological Association, in Washington, D.C., is the largest scientific and professional organization representing psychology in the United States and is the world's largest association of psychologists. APA's membership includes more than 152,000 researchers, educators, clinicians, consultants and students. Through its divisions in 54 subfields of psychology and affiliations with 60 state, territorial and Canadian provincial associations, APA works to advance psychology as a science, as a profession and as a means of promoting health, education and human welfare.
no
Informatics
Are video games harmful to children's mental health?
yes_statement
"video" "games" have a "harmful" impact on "children"'s "mental" "health".. "children"'s "mental" "health" is negatively affected by "video" "games".
https://bmcpublichealth.biomedcentral.com/articles/10.1186/1471-2458-10-286
Correlates of video games playing among adolescents in an Islamic ...
Abstract

Background: No study has ever explored the prevalence and correlates of video game playing among children in the Islamic Republic of Iran. This study describes patterns and correlates of excessive video game use in a random sample of middle-school students in Iran. Specifically, we examine the relationship between video game playing and psychological well-being, aggressive behaviors, and adolescents' perceived threat of video-computer game playing.

Methods: This cross-sectional study was performed with a random sample of 444 adolescents recruited from eight middle schools. A self-administered, anonymous questionnaire covered socio-demographics, video gaming behaviors, mental health status, self-reported aggressive behaviors, and perceived side effects of video game playing.

Results: Overall, participants spent an average of 6.3 hours per week playing video games. Moreover, 47% of participants reported that they had played one or more intensely violent games. Non-gamers reported suffering poorer mental health compared to excessive gamers. Both non-gamers and excessive gamers overall reported suffering poorer mental health compared to low or moderate players. Participants who initiated gaming at younger ages were more likely to score poorer in mental health measures. Participants' self-reported aggressive behaviors were associated with length of gaming. Boys, but not girls, who reported playing video games excessively showed more aggressive behaviors. A multiple binary logistic regression shows that when controlling for other variables, older students, those who perceived less serious side effects of video gaming, and those who have personal computers, were more likely to report that they had played video games excessively.

Conclusion: Our data show a curvilinear relationship between video game playing and mental health outcomes, with "moderate" gamers faring best and "excessive" gamers showing mild increases in problematic behaviors.
Interestingly, "non-gamers" clearly show the worst outcomes. Therefore, both non-gaming children and their parents should be informed about the positive impact of moderate video gaming. Educational interventions should also be designed to educate adolescents and their parents about the possible harmful impact of excessive video game playing on their health and psychosocial functioning.

Background

Playing video games is now a major leisure pursuit among children in many parts of the world [1–12]. Over the past three decades, a number of studies have looked at the effects of video games on children and adolescents. These studies were conducted mostly in developed high-income countries. Several of these studies have shown that violent video game exposure increases aggressive thoughts, angry feelings, physiological arousal, aggressive behaviors, and physiological desensitization to violence in the real world [1–11]. However, several other studies have found no connection between exposure to video game violence and youth violence [12–18]. Conducting a meta-analytic review of studies that examine the impact of violent media on aggressive behavior, Ferguson and colleagues (2009) reported that the use of poor aggression measures in most studies has inflated the effect size. Once corrected for methodological bias, they found no support for the notion that media violence leads to aggressive behavior [14]. Therefore, the evidence for the harmful effects of video game violence is, in fact, inconsistent. Regardless, evidence suggests that the prevalence of video games, especially violent video games, among adolescents from low- and middle-income countries is increasing dramatically and requires additional investigation to evaluate the connection between violent video games and aggressive behaviors.
Weak or insufficient enforcement of copyright protections and the selling of rated video games to children in these countries have intensified public concerns about the possible negative impact of violent video games on aggressive cognitions, attitudes, behaviors, academic performance, and the psychological well-being of children. This study describes patterns and correlates of excessive video game use in a random sample of middle-school boys and girls in the Islamic Republic of Iran. Specifically, we examine the relationship between video game playing and psychological well-being, aggressive behaviors, and adolescents' perceived threat of video-computer game playing. To the best of our knowledge, no study has ever explored the prevalence and correlates of video games among children in Iran. The impact of violent video games on the attitudes, behaviors, and mental health of the youth in Middle-Eastern countries may be intensified or indeed suppressed by several ongoing violent conflicts in the region. As in other Middle-Eastern countries, Iranians have constantly been exposed to well-publicized, broadcast, excessive violence in the region, including an eight-year Iran-Iraq war, the first Persian Gulf war (Iraq vs. Kuwait and a US-led coalition), the U.S. occupation of Afghanistan and Iraq following the tragedy of 9-11, almost 50 years of ongoing Palestinian-Israeli conflict, as well as almost daily suicide and/or car bombings by individuals in this region. Iran is the most populated country in the Middle East with a population of over 70 million. More than two-thirds of its population is under the age of 30 [19, 20].

Methods

Sample

This cross-sectional study was performed with a random sample of eight schools selected from all 26 middle schools in the city of Hamadan, which is located in central Iran, with a population of over 500,000 people [21]. Almost 1950 sixth to eighth graders were registered in the eight randomly selected middle schools.
One classroom from each school, comprising a total of 477 adolescents (almost one out of four students), was selected to participate in this study. To determine the sample size for this study, we specified the difference between the largest mean and the smallest mean for GHQ-28 as the effect size (delta), and we hypothesized that all means for GHQ-28 other than the two extreme ones (non-gamers and excessive gamers) are equal to the grand mean. We calculated the sample size for three different effect size delta values (0.25, 0.75, 1.25) corresponding to "small", "medium", and "large" effects, according to Cohen & Cohen, Statistical Power Analysis for the Behavioral Sciences. The initial calculation indicates that using a medium effect size, we need at least 45 subjects in each group (180 participants) in order to detect a reasonable departure from the null hypothesis (α = 0.05 and 1 - β = 85%). However, our preliminary investigation showed that only 10% of students were non-players; therefore, had we selected only 180 participants, just 18 (180 × 10%) of them would have been non-players. We therefore increased the sample size by 250% (n = 180 × 250% = 450) to have 45 non-players in our sample. This survey was self-administered in the absence of the instructor and collected during regular classroom hours by research associates of this project who were unaffiliated with the schools. To increase the validity of the responses, efforts were made to guarantee complete anonymity. Students were given a brief introduction and were asked not to write their name or any other identifiable information anywhere on the survey. The survey was conducted from February to March 2008. This study was conducted with approval from Hamadan University's Institutional Review Board. Informed assent and consent were obtained from participants and their parents/guardians. The survey instrument was pilot tested with 20 students and modified accordingly.
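The sample-size inflation described above is plain arithmetic and can be checked in a few lines. The figures (45 subjects per group, 180 base participants, the 250% inflation, and the 10% non-player rate) come from the text; the four-group count is inferred from 180 / 45, and the variable names are illustrative:

```python
# Sample-size adjustment sketched from the Methods section.
per_group = 45                 # minimum subjects per group (medium effect size)
groups = 4                     # inferred from 180 total participants / 45 per group
base_n = per_group * groups    # 180 participants

# A preliminary investigation found only ~10% non-players, so the base
# sample is inflated by 250% to secure 45 non-players.
inflation = 2.5
non_player_rate = 0.10
target_n = int(base_n * inflation)                       # 450
expected_non_players = round(target_n * non_player_rate) # 45

print(base_n, target_n, expected_non_players)
```

This reproduces the paper's figures: 450 recruited students are needed so that the expected 10% non-player stratum still contains the required 45 subjects.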
The questionnaire was administered in Farsi, the official language of Iranian schools. Regardless of ethnicity or background, all Iranian students speak and write Farsi.

Measures

Demographics and video-game playing

In addition to demographic characteristics, the survey instrument included several items that were specifically designed to capture the amount of time students spend playing video games during weekdays and weekends. Commitment to video game use, the average and longest duration of play, and the type of video games were assessed.

Mental health status

The validated 28-item Farsi version of the General Health Questionnaire (GHQ-28) [22] was used to assess the mental health status of participants. The GHQ refers to subjective symptoms of psychological distress, somatic manifestations often associated with anxiety and depression, relationship difficulties, and social, family, and professional roles [23]. The GHQ-28 is composed of 4 subscales (score range, 1-7): somatization, anxiety, social dysfunction, and depression. Both subscale and summated total scores were used [23, 24]. All items have a 4-point scoring system using Likert scoring (0-1-2-3). A higher score on the GHQ-28 represents poorer mental health status. In our sample the Cronbach's alpha coefficients of reliability of the subscales vary around 0.75, and the internal consistency of the total scale is 0.90. The participants were classified into higher and lower GHQ-28 groups based on a cutoff score of 8.

Aggression

The Orpinas aggression scale was used to measure aggressive behaviors of students [25]. The scale consists of 11 items designed to measure self-reported aggressive behaviors among school-aged students that might result in psychological or physical injury to other students.
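The GHQ-28 scoring scheme described above (28 items, Likert scoring 0-3, four subscales, a total-score cutoff of 8) can be expressed as a minimal sketch. The function name, the input layout, the assumption of 7 consecutive items per subscale, and the strict `>` comparison at the cutoff are illustrative choices, not taken from the instrument itself:

```python
# Illustrative GHQ-28 scorer: 28 items, Likert-scored 0-3, grouped into
# four subscales; higher totals indicate poorer mental health.
SUBSCALES = ("somatization", "anxiety", "social_dysfunction", "depression")

def score_ghq28(responses, cutoff=8):
    """responses: sequence of 28 integers in 0..3, ordered by subscale."""
    if len(responses) != 28 or any(r not in (0, 1, 2, 3) for r in responses):
        raise ValueError("expected 28 Likert responses scored 0-3")
    # Assumption: 7 consecutive items per subscale (28 / 4).
    subscale_scores = {
        name: sum(responses[i * 7:(i + 1) * 7])
        for i, name in enumerate(SUBSCALES)
    }
    total = sum(responses)
    # Cutoff of 8 per the text; whether the boundary is inclusive is unstated.
    group = "higher" if total > cutoff else "lower"
    return {"subscales": subscale_scores, "total": total, "group": group}
```

For example, a respondent answering 1 on every item would total 28 and fall in the "higher" (poorer mental health) group.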
The scale requests information on the frequency of the most common overt aggressive behaviors, including verbal aggression (teasing, name-calling, encouraging students to fight, threatening to hurt or hit) and physical aggression (pushing, slapping, kicking, hitting), as well as on anger (getting angry easily, being angry most of the day) [25]. The internal consistency of the scale showed a coefficient alpha of 0.89.

Perceived side effects of video-computer games
A 5-item rating scale was used to gauge students' beliefs about the side effects of playing excessive video games. Example items include: "I believe that excessively playing video games has negative effects on educational performance" and "I believe that obesity could be a side effect of excessively playing video games." Each item was measured on an ordinal 5-point Likert-type scale (1 = certainly disagree, 5 = certainly agree). Higher scores on the scale indicated a greater perceived threat from video-computer games. The internal consistency reliability of this scale was examined using Cronbach's coefficient alpha (α = 0.72).

Video violence exposure
Based on previous content analyses of popular video games among Iranian children, the video games that participants reported playing were categorized into two major groups: violent and non-violent games. Additionally, one of the co-authors of this study personally examined the content of each game to verify that the games were correctly categorized.

Statistical Analysis
Descriptive statistics were calculated to identify the distribution and frequency of all items, which were subsequently used to construct the independent scales and indices for examination. In the bivariate analysis, chi-square tests were performed to measure the association between independent and outcome variables. A series of logistic regression and descriptive analyses was then computed.
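As a minimal illustration of the bivariate step just described, the Pearson chi-square statistic for a contingency table can be computed as follows (the counts below are invented toy numbers, not the study's data):

```python
# Pearson chi-square statistic for an r x c contingency table, the
# bivariate test used in the analysis. Counts are invented for illustration.

def chi_square(table):
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return stat, dof

# rows: moderate vs excessive gamers; cols: lower vs higher GHQ-28 group
stat, dof = chi_square([[120, 40],
                        [30, 30]])
```

In practice the statistic would be compared against the chi-square distribution with `dof` degrees of freedom (as SPSS does) to obtain the p-value.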
Interrelationships between independent variables were examined to assess potential multicollinearity. All statistical analyses were performed using the statistical software package SPSS (SPSS Inc., Chicago, Illinois, USA).

Results
Of the 477 adolescents selected to participate in this study, 444 (93%) completed the survey. Fewer than 7% of students refused to participate or did not complete the survey. Since the survey was anonymous, we were unable to record and compare the non-responders (n = 31) with participants. Table 1 shows the demographic and gaming characteristics, aggressive behaviors, and mental health status of our sample. About 47% and 53% of the participants were 12-13 and 14-15 years old, respectively. The sample was 51% female and 49% male. More than 93% of the students who participated in this study reported that they had played video games. The distribution of initiation ages for video-game playing was 6%, 11%, 45%, and 40% for the ≤5, 6-7, 8-10, and 11-13 year age groups, respectively. Computer games were used by 71%, home consoles by 16%, and internet games by 13%; no arcade video game machines were reported by our participants (arcade machines are not currently available in Hamadan). More than 57% of children reported having a computer or game console in their bedroom. Girls and boys spent an average of 6.3 and 6.2 hours per week, respectively, playing video games. Moreover, 47% of participants reported that they had played one or more intensely violent games, including: Dead or Alive, Def Jam, Doom, Driver, Mortal Kombat, Grand Theft Auto, Resident Evil, and Prince of Persia. The bivariate correlates of video gaming are reported in Table 2.
Participants' mental health status, measured by the GHQ-28, showed a significant relationship with initiation age and years of video game playing, indicating that those who started gaming at younger ages and had been playing for more years were more likely to have poorer mental health scores. No relationship between severe arguments with parents and gaming was detected. Interestingly, participants who reported more years and hours of play also reported a higher number of negative side effects of video gaming. Participants' aggressive behaviors were associated with length of gaming: those who admitted playing longer scored higher on the Orpinas aggression scale (r = .15; p < .05). Further analysis showed that none of the 11 individual items of this scale was associated with length of time playing video games. However, boys who reported playing video games excessively - more than 7 hours per week - scored higher on: 1) fighting back; 2) name-calling; 3) teasing; and 4) threatening to hurt or hit. None of the aggression scale items was associated with excessive playing among girls. Table 3 compares the mean scores on the General Health Questionnaire (GHQ-28) and its sub-scales across four groups (non-players to excessive players). Non-gamers and "excessive gamers" overall reported poorer mental health than low or moderate players. Additionally, non-gamers and "excessive gamers" reported higher scores on three of the four sub-scales, indicating that they were more likely than low or moderate players to suffer from anxiety/insomnia, social dysfunction, and somatic symptoms.

Table 3 Mean total and sub-scale scores on the 28-item General Health Questionnaire for video gamers and non-gamers (N = 444).

The multiple binary logistic regression technique was employed to examine independent correlates of excessive video game playing.
The independent variables included in the equation are reported in Table 4. The estimated Nagelkerke R2 indicates that this set of variables/subscales explains over 14% of the variance in the dependent variable. Age, personal computer ownership, and perceived side effects of playing video games each showed an independent effect on the outcome variable. This finding indicates that, controlling for all other variables, older students [OR = 1.34 (95% CI = 1.04 - 1.74), p < .05], students who perceived less serious side effects of playing video games [OR = 3.5 (95% CI = 0.66 - 0.98), p < .01], and those who had personal computers [OR = 3.4 (95% CI = 2.13 - 5.41), p < .05] were more likely to report that they had played video games excessively. No significant independent relationship between parental demographic characteristics and excessive video gaming was detected.

Discussion
This cross-sectional study was performed to describe patterns and correlates of video game use in a random sample of middle-school students (aged 12-15) in the Islamic Republic of Iran. Similar to American teens [26], our data show that nearly every teen (93%) plays video games, regardless of gender, age, or socioeconomic status. Overall, our participants spent an average of 6.3 hours per week playing video games, and more than 15% of participants spent more than 10 hours per week playing. The stereotype that only boys play video games is far from true in Iran, as girls constitute a larger share of total video gamers: 92% of boys play video games, as do 96% of girls. While almost all girls and boys play video games, boys typically play for longer per sitting than girls; 11% of boys and 6% of girls, respectively, spend more than 4 hours per sitting. One out of two adolescents surveyed in this study reported that they had played one or more intensely violent games.
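The odds ratios and confidence intervals reported from the logistic model above follow a standard transformation of the fitted coefficients. As a generic illustration (the coefficient and standard error below are made up, not the study's estimates), a regression coefficient maps to an odds ratio with a 95% CI as follows:

```python
import math

# How a logistic-regression coefficient maps to an odds ratio with a 95% CI.
# beta and se are invented for illustration, not the study's estimates.

def odds_ratio_ci(beta, se, z=1.96):
    """Return (OR, lower, upper): exp of the coefficient and of its
    Wald confidence limits beta -/+ z * se."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

or_, lower, upper = odds_ratio_ci(beta=0.293, se=0.132)
```

Because the interval is built on the log-odds scale and then exponentiated, an odds ratio is significant at the 5% level exactly when its 95% CI excludes 1.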
One aspect of the results that warrants further discussion is the impact of non-gaming and excessive video game playing on mental health status. Importantly, non-gamers and excessive gamers overall reported poorer mental health than moderate players. As indicated in Table 4, we detected a curvilinear relationship between video game playing and mental health outcomes, with "moderate" gamers faring best. While "excessive" gamers showed mild increases in problematic outcomes (somatic symptoms; anxiety and insomnia; social dysfunction; and general mental health status), non-gamers showed the worst outcomes. Other researchers have documented the same effect. Kutner and Olson (2008) surveyed 1,254 children in grades 7 and 8, along with 500 of their parents. They found that boys who did not play any video games during a typical week had a higher risk for problems [27]. They also documented many creative, social, and emotional benefits of video game play, which many children used to relieve stress. However, their surveys also documented statistically significant relationships between violent game play and some common childhood problems among boys, but not girls. Boys who extensively played any Mature-rated game had twice the risk of certain aggressive behaviors and school problems compared to boys who played games with lower age ratings [27]. Indeed, similar to our study, several other studies suggest that moderate video game use may be a positive experience, while excessive use may cause problems [28]. While research suggests that excessive use of video games may have a detrimental effect on students' GPA, moderate use of video games and the Internet has been linked with a more positive academic orientation compared with nonuse or high levels of use [29]. Therefore, if parents set limits on their children's daily video game use, the worst of the documented problems associated with video games can be avoided [28].
Another component of the results that requires further attention is the detected impact of the perceived side effects of video game playing on excessive game playing. The multiple binary logistic regression shows that, controlling for all other variables, those who perceived less serious side effects of playing video games were more likely to report that they had played video games excessively. This important finding may have extensive policy implications. Educational motivational interventions should be designed to educate adolescents about the possible harmful impact of excessive video game playing on their health and psychosocial functioning. Our findings also suggest that playing video games may have different social implications for girls than for boys. Our data show that boys, but not girls, who admitted playing video games excessively exhibited more aggressive behaviors. However, recent data show that moderate gaming among young men may provide a healthy source of socialization, relaxation, and coping [30]. In addition, recent studies conducted among Singaporean, Japanese, and American samples provide robust evidence that adolescents and young adults who played more pro-social video games behaved more pro-socially [31]. Therefore, strategies are needed to make pro-social video games more attractive and accessible to adolescents, particularly male adolescents. This study has limitations that restrict the generalizability of its results. First, the data are cross-sectional, which limits our ability to make causal inferences among mental health status, aggressive behaviors, and video game playing. Second, we were unable to record and compare the non-responders with participants, and unable to document how representative this sample is of Iranian adolescents.
Third, it is important to note that although a statistically significant relationship between excessive gaming and mental health was detected, the effect sizes of these bivariate correlations fall below Ferguson's (2009) [32] recommendation for "practical significance" (r ≥ .20) and should not be over-interpreted as strong evidence of harm.

Conclusion
It is important to inform parents that while moderate video game use may be a positive experience, excessive use may cause problems. Both children and parents of non-players should also be informed about the positive impact of moderate video gaming. Similarly, comprehensive actions are needed to diminish excessive time spent playing video games, particularly Mature-rated games, among adolescents in general and boys in particular. Finally, strategies are needed to make pro-social video games more attractive and accessible to children, particularly male adolescents who play video games excessively.

Unsworth G, Devilly GJ, Ward T: The effect of playing violent video games on adolescents: Should parents be quaking in their boots? Psychology, Crime and Law. 2007, 13: 383-394. 10.1080/10683160601060655.
Colwell J, Kato M: Investigation of the relationship between social isolation, self-esteem, aggression and computer game play in Japanese adolescents. Asian Journal of Social Psychology. 2003, 6: 149-158. 10.1111/1467-839X.t01-1-00017.

Additional information
Competing interests
Authors' contributions
All authors read and approved the final manuscript. H.A. conceived of the study and participated in the design, data collection, and analysis, as well as manuscript preparation. M.B. participated in the data analysis and manuscript preparation. A.F. participated in the design and data collection. B.M. participated in the design and data collection.

Rights and permissions
Open Access: This article is published under license to BioMed Central Ltd.
This Open Access article is distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
https://www.sciencedaily.com/releases/2010/06/100607122547.htm
Violent video games may increase aggression in some but not ...
Date: June 8, 2010
Source: American Psychological Association

Playing violent video games can make some adolescents more hostile, particularly those who are less agreeable, less conscientious and easily angered. But for others, it may offer opportunities to learn new skills and improve social networking. In a special issue of the journal Review of General Psychology, published in June by the American Psychological Association, researchers looked at several studies that examined the potential uses of video games as a way to improve visual/spatial skills, as a health aid to help manage diabetes or pain, and as a tool to complement psychotherapy. One study examined the negative effects of violent video games on some people. "Much of the attention to video game research has been negative, focusing on potential harm related to addiction, aggression and lowered school performance," said Christopher J. Ferguson, PhD, of Texas A&M International University and guest editor of the issue. "Recent research has shown that as video games have become more popular, children in the United States and Europe are having fewer behavior problems, are less violent and score better on standardized tests. Violent video games have not created the generation of problem youth so often feared." In contrast, one study in the special issue shows that video game violence can increase aggression in some individuals, depending on their personalities. In his research, Patrick Markey, PhD, determined that a certain combination of personality traits can help predict which young people will be more adversely affected by violent video games.
"Previous research has shown us that personality traits like psychoticism and aggressiveness intensify the negative effects of violent video games and we wanted to find out why," said Markey. Markey used the most popular psychological model of personality traits, called the Five-Factor Model, to examine these effects. The model scientifically classifies five personality traits: neuroticism, extraversion, openness to experience, agreeableness and conscientiousness. Analysis of the model showed a "perfect storm" of traits for children who are most likely to become hostile after playing violent video games, according to Markey. Those traits are: high neuroticism (e.g., easily upset, angry, depressed, emotional, etc.), low agreeableness (e.g., little concern for others, indifferent to others' feelings, cold, etc.) and low conscientiousness (e.g., break rules, don't keep promises, act without thinking, etc.). Markey then created his own model, focusing on these three traits, and used it to help predict the effects of violent video games in a sample of 118 teenagers. Each participant played a violent or a non-violent video game and had his or her hostility levels assessed. The teenagers who were highly neurotic, less agreeable and less conscientious tended to be most adversely affected by violent video games, whereas participants who did not possess these personality characteristics were either unaffected or only slightly negatively affected by violent video games. "These results suggest that it is the simultaneous combination of these personality traits which yield a more powerful predictor of violent video games," said Markey. "Those who are negatively affected have pre-existing dispositions, which make them susceptible to such violent media." "Violent video games are like peanut butter," said Ferguson. "They are harmless for the vast majority of kids but are harmful to a small minority with pre-existing personality or mental health problems."
The special issue also features articles on the positives of video game play, including as a learning tool. For example: Video games serve a wide range of emotional, social and intellectual needs, according to a survey of 1,254 seventh and eighth graders. The study's author, Cheryl Olson, PhD, also offers tips to parents on how to minimize potential harm from video games (i.e., supervised play, asking kids why they play certain games, playing video games with their children). Commercial video games have been shown to help engage and treat patients, especially children, in healthcare settings, according to a research review by Pamela Kato, PhD. For example, some specially tailored video games can help patients with pain management, diabetes treatment and prevention of asthma attacks. Video games in mental health care settings may help young patients become more cooperative and enthusiastic about psychotherapy. T. Atilla Ceranoglu, M.D., found in his research review that video games can complement the psychological assessment of youth by evaluating cognitive skills and help clarify conflicts during the therapy process.

American Psychological Association. "Violent video games may increase aggression in some but not others, says new research." ScienceDaily, 8 June 2010. <www.sciencedaily.com/releases/2010/06/100607122547.htm>.
https://www.mentalup.co/blog/positive-and-negative-effects-of-technology-on-children
Positive and Negative Effects of Technology on Children | MentalUP
Positive and Negative Effects of Technology on Children

Is Technology Bad for Kids?
All parents want to know whether digital technology is making children's lives better or worse. Before talking about the positive and negative effects of technology on child development, let us remind ourselves that almost none of us can imagine life without it. So, is technology bad for kids or not? It's clear that unconscious use of technology can expose children to its negative effects.

Positive and Negative Effects of Technology on Child Development
If an adult uses technology in a harmful way, we can call it a "choice" rather than "unconsciousness". But it's different for children: they are generally not aware of the possible bad sides of technology. Questions like "How does technology work for kids?", "Is technology good or bad for kids?", and "How can kids stay safe on the internet?" preoccupy parents' minds.

💡 TIP: By applying the internet safety guidance for kids that we'll share below and choosing the right apps, websites, and learning games for toddlers and older children, you can provide internet safety.

Let's look into the positive effects of technology on child development that you can benefit from, and the negative effects of technology on children's health and development that you should avoid.

10 Positive Effects of Technology

1. Mental Skill Development with Educational Content
The positive effects of the internet on early childhood education can be very beneficial. Educational and instructional practices, such as apps that offer brain games for kids to improve cognitive skills, are among these benefits. Several educational games for kids, like memory games, attention games, and math games, are at their fingertips today. Therefore, safe internet use for children can be rewarding with the right apps, such as MentalUP, allowing them to enjoy themselves while supporting their development and academic success.
Here is an example of how beneficial technology can also be joyful:

Logic games: Kids can develop their logic, strategic thinking, and visualization skills with these kinds of gamified exercises.

2. Increase in IQ
Tomoe Kanaya of Cornell University in the United States conducted scientific research in nine schools among members of different races and social groups. According to this study, conducted on approximately 9,000 students, the IQ scores of the current generation are higher than those of the previous generation. Taking the Flynn effect into account, the results do not change, even though every new IQ test is more difficult than the previous ones. Experts link this effect directly to technology: the increase in the number of technological products in children's lives, and consequently the increased stimuli, serve as exercises that enable children to solve more complex problems. The positive effects of technology on child development, such as increased IQ scores on the Stanford-Binet test, are among the positive impacts of the internet when technology is used correctly.

"The more subjects a child in a fast learning process gets interested in or has experience with, the more his/her IQ level increases" - Psychologist Ferahim Yeşilyurt

Of course, all these benefits are possible only with the conscious use of technology. On the other hand, it should be kept in mind that an excess of stimuli leads to distraction. The duty of ensuring internet safety for kids falls mainly on the parents. Using MentalUP is a great way to make the most of the positive effects of the internet! It has more than 150 games to reinforce your kids' visual, attention, memory, logic, and linguistic skills.

💡 Kids love MentalUP's 20-minute daily workouts! In addition, MentalUP doesn't show ads that redirect to other websites, apps, or platforms, so it works as a source of the best safe kids' games suitable for different age groups.
😇 You can perfectly meet your child's desire to play mobile games with useful apps like MentalUP. 🙌

3. Learning Enhancement
The effects of gadgets and the internet on learning are great examples of the positive effects of the internet on students. "Learning how to learn" is a very important skill. Thanks to the Internet, there is a library with an infinite number of books in every house. Children can do extensive research on the subjects they wonder about. In this way, they learn to access information, do research, and study, and safe internet use for children comes with many benefits.

4. Problem-Solving Skills
Is technology good or bad for kids? Visual design programs, technical drawing programs, coding programs, and similar design tools improve children's creativity. So it's not a matter of good or bad - it's all about how, when, and what you use. Creative children are successful in adapting to a changing world. Likewise, they are successful in creating new solutions to newly emerging problems. Enhancing problem-solving skills in childhood will allow them to overcome obstacles more easily in the future.

5. Parental Control in Emergency Situations
The positive impact of technology on kids can sometimes be a lifesaver for your child and society. Your children should be able to reach you in case of an emergency, even if they are not old enough to use a mobile phone. Simple mobile devices like smartwatches are produced to meet this need in a healthy way. Smartwatches have various features, and they make cool holiday gifts. For example, you can set the numbers that your child can call. Likewise, you can determine which numbers your child can receive calls from. In this way, you can eliminate the risk of your kid communicating with people he/she does not know. In schools, children are taught the numbers they should call in emergency situations. You can add the numbers of the ambulance and fire services to their contact list.

6.
Easy Access to Information
All children are curious at an early age, and they always want to get acquainted with the outside world. In this sense, the world wide web can help them understand what they want to learn. Because parents can't answer all the questions their kids ask, children can search for the things they are curious about and expand their perspectives. One of the most important positive effects of the internet on a child is easy access to information. You can guide your children to discover their interests and provide them with a healthy environment in which to spend time efficiently.

7. Foreign Language Learning
Many parents want their children to learn a foreign language that helps them fit into the modern world. Thanks to technological improvements, it is now easier to access resources. Kids can learn a foreign language quickly by using online tools, fun games, and exercises without getting bored. You can provide them with the right sources and develop their language skills. One of the positive points of the internet is that children can start learning a foreign language online at very little cost. Apps tailored for this purpose are among the best examples.

8. Critical Thinking Development
Thanks to online interactive games and mental exercises, modern children start to develop critical thinking skills earlier. These educational games and exercises help kids grow intellectually by boosting their strategic thinking skills. Because these online games and exercises appeal to kids with different interests, they can have fun while improving at the same time. This is one of the most crucial positive effects of the internet, as it helps kids' grades at school.

9. More Active Life
The modern world and city life don't provide children with spacious environments in which to be active. They often can't run and move around as much as they want to.
At this point, technology rushes to help us. Because kids are in the house most of the time, they stay still and don’t satisfy their need to move. But if they start to use appropriate apps, like fitness apps for kids, they can have a more active life regardless of the space they have. MentalUP provides 240+ fitness exercises for kids that they can do daily. Thanks to its scientifically approved features, your kids will have a more active life even if they stay in the house, and they will feel more refreshed when it is time to study! 🧠💪 10. Entertainment Technology is a huge entertainment universe for both children and adults. When it is used as a safe tool, your kids can benefit from fun video games, puzzles, riddles, and other activities in their leisure time. You don’t need to be afraid that your children will be harmed by the internet. You should just be careful about which websites your kids visit. That way, you can protect them from the negative effects of computers on child development. Now let’s look at the negative effects that you need to be aware of. 10 Negative Effects of Technology We’ve talked about the wonders technology can provide, so now it’s time to discuss the negative effects of the internet on a child caused by misuse. The overuse of technology and unsupervised access to the Internet may have many negative effects on children's social skills, as well as their mental and physical development. 1. Lower Attention Span Spending too much time using the most common technological devices, such as computers, smartphones, and tablets, can cause distraction and concentration difficulties. One of the negative effects of technology on children is that it may affect their academic success. As we mentioned at the very beginning, the cause of such problems is the extreme (unconscious) use of technology, not the technology itself. It should not be forgotten that technological developments primarily aim to benefit us.
Using the right technology in the right amount is not harmful to anyone. MentalUP is a scientifically approved app that features 150+ entertaining and educational games for kids. 🧠🌟 Thanks to its unique features, such as its various tracking tools, you can always observe your children's improvement as their skills are boosted. 🤩 And you’ll be there to discover which skills they need to work on to improve. 💪 2. Minimized Social Interaction Negative effects of the internet on a child's social skills usually appear when the child plays too many games on a computer, because they become disconnected from real life. A child who doesn’t communicate, interact, and share with his/her environment will try to meet all these needs in a virtual environment. For example, children can come to see the level they have achieved in a game as a source of respect. These titles, which have no importance in real life, should be replaced by skills such as respect, love, sharing, and communication. 3. Increased Aggression We all know that children have a wider imagination than adults. Therefore, they are more vulnerable to the content they come across. Children can be exposed to frightening elements in the videos, cartoons, or games they watch and play. Children who are not capable of perceiving the difference between the real and the imaginary can experience unfavorable situations such as fear of being alone, nightmares, and not being able to go to the toilet alone. Similarly, when they consume videos or games containing elements of violence, they can turn into angry children. Internet safety for kids should be provided to prevent unhealthy emotions from building up. 4. Health Problems The negative impacts of technology on children are not limited to emotional and mental disorders. The subject also has a physical dimension. The following disadvantages of technology in child development may occur due to watching TV for a long time, using a computer or a tablet in the wrong sitting position, etc.: vision problems; neck pain; distortion of the skeletal structure of the body; numbness in the arms, hands, and fingers; and overstrain. We can also say that a child who is immobilized by technological devices loses time that should be allocated to activities that support muscle development, such as walking, running, jumping, and playing physical games. 5. Reduced Sleep Quality The harmful effects of technology on children's health are not limited to the above-mentioned effects. Children, as well as adults, can suffer from sleeping disorders. Technology for kids should be supervised not only in terms of what they’re doing and where, but also in terms of for how long and when. The risk of developing an addiction is especially high among children. How many times do you have to tell your child that it is bedtime before they actually go to their bed? 🤔 Having the right Internet safety measures for kids will prevent both lack of sleep (which has a tremendous negative effect on school, work, and all kinds of daily routines) and the development of a gaming addiction disorder. 6. Cyberbullying, Abuse and Security Risks We have often talked about “overuse”. There are some hidden threats among the negative effects of technology on children that are not related to overuse. For example, cyberbullying! Considering that the technologies children are interested in can usually be linked to the Internet, we need to be aware of external threats. In games and apps with online communication features, children can be exposed to bullying. They can be abused by malicious adults. Likewise, malicious adults can obtain personal information about the child or his/her family. That’s why being aware of “cybersecurity for kids” is a must. 7. Depression Risk We know that, alongside the positive effects of technology on teenagers and children, the internet has bad aspects too. One of the most common is depression risk. With technological improvements, children unfortunately tend to become less social and use social media more. If the necessary precautions aren’t taken, they can easily feel depressed and alone. The more they turn in on themselves, the greater the risk of depression. When you see signs of anxiety in your kids, you need to help by connecting with them or seeking support from experts. 8. Obesity Unless your children use online resources appropriately, they tend to be less active. As the time they spend sitting still increases, they face the obesity problem that many kids struggle with. Because obesity doesn’t show up in a short while, parents can have trouble noticing it. Obesity risk is one of the most important negative effects of the internet on a child. Don’t forget that obesity is considered a modern-world disorder. 9. Lower Grades We are all aware of the positive effects of the internet on students, but unless technological tools such as computers, tablets, or smartphones are used in an effective way, they can cause children to get lower grades in school. As we’ve mentioned before, when kids start to spend more time on screens, their attention span decreases and they can feel depressed; hence, their performance can decline. Remember that increased technology usage means less time spent on homework. 10. Emotional Problems The internet has both positive and negative effects on our daily life, but if we want our children to experience the good sides, we need to be careful about excessive usage. When kids start to spend more time online, they are at risk of developing emotional and behavioral problems. The overuse of technology can delay the development of your children's emotional skills.
When you notice this, even if you try to intervene, they may react aggressively or shut down completely. In these kinds of situations, you should get help from professionals. Negative Aspects of the Internet and How to Prevent Them The positive points of the internet are hard to dismiss; however, it is important to take the necessary Internet safety measures so you can benefit from the positive effects of technology without facing its negative impacts. 1. What Kind of Harmful Content Are Our Children Exposed to? The first step in protecting children from harmful Internet content is knowing what kind of content they are exposed to. We can predict and prevent risky situations effectively when we have information about the content. Just like the safety instructions on airplanes, we should first put the oxygen mask on ourselves and then help our child. One of children's favorite types of content is games. It is possible to access games on websites and in app stores. When we leave children alone, they can consume whatever content they want, without filtering. The negative effects of technology on youth are usually caused by content we are not aware of - and this especially applies to free services. Parents also make the same mistake from time to time! Note: The revenue source of free services (games, apps, etc.) is usually ads. With one ad click from an innocent car-racing game, kids can switch to zombie games. ⛔️ They may encounter advertising videos with violence and fear. They can switch to betting sites or obscene sites with a single click. Safe Internet use for children is harder to maintain when it is easy to wander to different sites. ⚠️ So, keep in mind that internet usage can have a negative effect on people, especially children. MentalUP is a secure playground for kids created by scientists and academicians. There are no ads, redirections, online chat rooms, or addictive elements. It features only skill-booster games that children can benefit from throughout their lives.
👨‍👩‍👧‍👦 We just said, "Prohibition isn't the solution." Now we're talking about a significant risk! That means it’s the right time to meet your child's need to play games by offering them games that meet the following specifications. 2. Which Specifications Do Reliable Online Games Have? No ads; no elements of violence and fear; no non-exemplary behavior; no online communication with other players; licensed, if possible; trustworthy in the public sphere; and presented by a known (trusted) publisher. Allow your child to play games with these and similar features for a specific period of time, depending on their age. 3. Attention to Video Content YouTube has beneficial content for children. However, there is also some content that children should never watch. It's only a matter of time before children get exposed to content such as MOMO and the Blue Whale, or cartoon characters that cut each other up. Let's say you have protected your child from dangerous games such as the Blue Whale. But do you also protect your child from opportunists who want to gain an audience by exploiting popular topics on YouTube? Safety on the Internet for kids is not a one-time thing - it requires constant supervision. For example: We've looked at the video of a young YouTuber investigating internet content that could be harmful to children. In the video, the horrible MOMO was claimed to be a real character that lives and threatens people. We complained about this content, but there are thousands more like it, and new ones are added every day. The claim that MOMO is a living character is quite illogical to adults, but children can believe it. Then mental problems begin to develop, along with symptoms such as fear of darkness and not being able to go to the toilet alone. We know that it is easy to say “Prevent your child from watching videos on the Internet!” But we also know that it is difficult to put this into practice.
We also understand those parents who do not want to ban the use of the Internet entirely. If you want to allow your child to watch videos in a controlled way, here are some tricks you can follow: For example, you can try “YouTube Kids” instead of YouTube. The content published on YouTube Kids is checked for child-suitability before publication. But don't forget to check what your child is watching. At the same time, remember to inform them about imaginary characters and tell them that these characters are not real. Teaching them what technology and the internet are will help protect them from negative effects on child development. You can also check out the children's video app called “Jellies”. According to our research, there are no ads in this app. It only provides content that can be useful for children, which makes for a better environment for safe Internet surfing for kids. IMPORTANT REMINDER: While the YouTube Kids and Jellies applications seemed to be reliable channels in our research, we want to remind you that every parent should do this research himself/herself and that the responsibility lies with them. To sum up, you can offer your child reliable video channels instead of prohibiting video viewing. Of course, age-appropriate daily usage limits must be taken into account. 4. Apps with Online Communication Are a Real Danger In some online games, there is the possibility of corresponding with other players. Talking to a stranger online, in the virtual world, is just as dangerous for your child as talking to a stranger outside, in the real world. The real danger here is that your child may not be communicating with another child at all! The hidden danger we cannot ignore is that there might be a malicious adult on the other side. Children's emotions and thoughts can be easily manipulated. A malicious adult can mislead your child by giving him/her various instructions.
Applications with online correspondence carry many threats, ranging from the seizure of your personal information to - much worse - child abuse! Sharing some tips on how children can stay safe on the Internet with your child will help them protect themselves from these vulnerabilities. 5. Age Limit for Social Networks We have talked about the risks of apps with online communication. The same risks, and even more, are present on social networks. Therefore, it is necessary to pay attention to the age at which a child can start using social media, as prescribed by ICTA. According to ICTA, children should not create a social media profile before the age of 13. Internet safety for kids and young adults starts with adhering to these limitations. 6. Always Communicate with Your Child If you are not an authoritarian parent who wags their finger all the time, and if you do not suppress your child, he/she will be more open with you. If your child is comfortable enough to consult you when they notice something suspicious, you will be aware of the potential dangers. Internet safety for kids means kids should feel safe sharing their problems and insecurities with you. 7. Tell Your Child the Difference Between Imagination and Reality Explain that he/she should not be afraid of the scary/ugly characters that may appear on the Internet and should close any content that he/she does not like. Raising awareness in this matter will reduce the negative emotional impact of technology on children. 8. Explain to Your Child That Some Applications Are Not Suitable for Them In normal life, they know that they should not talk to people they do not know on the street; explain that the internet is no different, and that apps that can be used to communicate with others in spoken or written form are not suitable for them. If they do get contacted by someone online, remind them that they should share this with you without responding to the message.
Sharing information on how to stay safe on the Internet with your kids will prevent your child from forming unwanted contacts. 10. Do Not Ignore the Wishes and Needs of Your Child Set daily usage times instead of banning, and provide reliable content for your children. Evaluate each new game before you download it. Choose which application to use when your child wants to watch a video. Do the necessary research and provide reliable content. MentalUP is a great example of a reliable and useful app that has no ads redirecting to other websites, apps, or platforms! 🎮️ All of the educational games that MentalUP provides work as daily exercises you can complete in 20 minutes. 🥳 With MentalUP, which is also used by educational institutions, you can meet your kids’ desire to play mobile games and help them benefit from the good effects of the internet while they are having fun! 👏💯 Frequently Asked Questions About Children and Technology Is technology good or bad for kids? It simply depends on how you raise them and which content you introduce them to. Bear in mind that there are both positive and negative effects of the internet. In this way, you can not only help your kids take advantage of its benefits but also protect them from its harmful effects. How does technology negatively affect children? As we mentioned before, various risk factors can cause the negative effects of technology. If parents let their children choose what they will spend their time on without any control, the risk is maximized. What are the negative effects of the internet? It's hard to label the internet as simply negative or positive - it depends on how kids use it. Cyberbullying, obesity, and lower grades are some of the most common risks. How does the internet affect a child's development? The internet can be a very beneficial technology if parents have enough control over it.
Kids can play with educational toys for 2-3 year olds and older, or educational games like the ones MentalUP offers, and improve their cognitive skills along with their school grades. What are the dangers of technology for children? A lower attention span, reduced social interaction, and depression are some of the most common dangers that technology can create when used incorrectly. What are the positive effects of technology on children? When kids use apps and websites that offer beneficial technology, they can improve their problem-solving and critical-thinking skills while having fun.
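The smartwatch parental control described earlier ("Parental Control in Emergency Situations" - setting which numbers your child can call and receive calls from) amounts to a simple allowlist check. Here is a minimal, illustrative Python sketch of that idea; the phone numbers and the function name are hypothetical, not taken from any real device's firmware:

```python
# Hypothetical sketch of a smartwatch call allowlist:
# only parent-approved numbers may place or receive calls.

APPROVED_CONTACTS = {
    "+15550100": "Mom",
    "+15550101": "Dad",
    "+15550102": "Emergency contact",  # illustrative; real devices hardcode emergency numbers
}

def is_call_allowed(number: str) -> bool:
    """Return True if the number is on the parent-approved contact list."""
    return number in APPROVED_CONTACTS

print(is_call_allowed("+15550100"))  # approved contact: allowed
print(is_call_allowed("+15559999"))  # unknown caller: blocked
```

Real devices implement this logic in firmware and typically allow calls to emergency services regardless of the list, but the core idea is the same: anything not explicitly approved is blocked.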
https://www.brainandlife.org/articles/how-do-video-games-affect-the-developing-brains-of-children
How Do Video Games Affect Brain Development in Children and ...
How Do Video Games Affect Brain Development in Children and Teens? At age 17, Anthony Rosner of London, England, was a hero in the World of Warcraft online gaming community. He built empires, led raids, and submerged himself in a fantasy world that seemingly fulfilled his every need. Meanwhile, his real life was virtually nonexistent. He neglected his schoolwork, relationships, health, even his hygiene. "I never saw my real friends. I gained weight, became lazy, and spent nearly all of my time slumped over my computer," says Rosner, who played up to 18 hours a day, every day, for nearly two years. Rosner nearly threw away a university degree in pursuit of the game. According to a study by the NPD Group, a global market research firm, his gaming obsession isn't unique. Nine out of 10 children play video games. That's 64 million kids—and some of them hit the keyboard or smartphone before they can even string together a sentence. The problem: many researchers believe that excessive gaming before age 21 or 22 can physically rewire the brain. Researchers in China, for example, performed magnetic resonance imaging (MRI) studies on the brains of 18 college students who spent an average of 10 hours a day online, primarily playing games like World of Warcraft. Compared with a control group who spent less than two hours a day online, gamers had less gray matter (the thinking part of the brain).
As far back as the early 1990s, scientists warned that because video games only stimulate brain regions that control vision and movement, other parts of the mind responsible for behavior, emotion, and learning could become underdeveloped. A study published in the scientific journal Nature in 1998 showed that playing video games releases the feel-good neurotransmitter dopamine. The amount of dopamine released while playing video games was similar to what is seen after intravenous injection of the stimulant drugs amphetamine or methylphenidate. Yet despite mounting evidence about the cognitive, behavioral, and neurochemical impact of gaming, the concept of game addiction (online or not) is difficult to define. Some researchers say that it is a distinct psychiatric disorder, while others believe it may be part of another psychiatric disorder. The current version of the Diagnostic and Statistical Manual of Mental Disorders, DSM-V, states that more research needs to be done before "Internet Gaming Disorder" can be formally included. Still, experts agree gaming has addictive qualities. The human brain is wired to crave instant gratification, fast pace, and unpredictability. All three are satisfied in video games. "Playing video games floods the pleasure center of the brain with dopamine," says David Greenfield, Ph.D., founder of The Center for Internet and Technology Addiction and assistant clinical professor of psychiatry at the University of Connecticut School of Medicine. That gives gamers a rush—but only temporarily, he explains. With all that extra dopamine lurking around, the brain gets the message to produce less of this critical neurotransmitter. The end result: players can end up with a diminished supply of dopamine. Take a game like that away from addicted adolescents and they often show behavioral problems, withdrawal symptoms, even aggression, according to Dr. Greenfield. But not all gaming is bad. 
Video games can help the brain in a number of ways, such as enhanced visual perception, improved ability to switch between tasks, and better information processing. "In a way, the video game model is brilliant," says Judy Willis, M.D., neurologist, educator, and American Academy of Neurology (AAN) member based in Santa Barbara, CA. "It can feed information to the brain in a way that maximizes learning," she says. The Developing Brain on Games Video games are designed with a reward structure that's completely unpredictable. The tension of knowing you might score (or kill a warlock), but not knowing exactly when, keeps you in the game. "It's exactly the same reward structure as a slot machine," says Dr. Greenfield. The player develops an unshakeable faith, after a while, that "this will be the time I hit it big." That's a powerful draw for an adolescent's developing brain, which is impressionable. "The prefrontal cortex—the locus of judgment, decision-making, and impulse control—undergoes major reorganization during adolescence," explains Tom A. Hummer, Ph.D., assistant research professor in the department of psychiatry at Indiana University School of Medicine in Indianapolis. That executive control center is essential for weighing risks and rewards and for putting the brakes on the pursuit of immediate rewards (like gaming) in favor of more adaptive longer-term goals (like next week's chemistry test). This region of the brain doesn't reach maximum capacity until age 25 or 30, which may explain why young people are more likely to engage in hours of play while ignoring basic needs like food, sleep, and hygiene. In one experiment, young adult males who played a violent video game extensively for 2 weeks had lower activity in important brain areas while attempting to control behavior, compared with those who played no video games.
Without mature frontal lobes to draw on, adolescents and teens are less able to weigh negative consequences and curb potentially harmful behavior like excessive video gaming, which also impacts frontal lobe development. Violent video games are of concern to many experts. In a study of 45 adolescents, playing violent video games for only 30 minutes immediately lowered activity in the prefrontal regions of the brain compared to those who participated in a non-violent game. Previous research showed that just 10–20 minutes of violent gaming increased activity in the brain regions associated with arousal, anxiety, and emotional reaction, while simultaneously reducing activity in the frontal lobes associated with emotion regulation and executive control. The dopamine release that comes from gaming is so powerful, say researchers, it can almost shut the prefrontal regions down. That's one reason why gamers like Rosner can play for 18 hours straight. "Kids plop themselves in front of a computer and they'll stay there for 8, 10, 25, 36 hours," says Dr. Greenfield. And for kids like Rosner, who feel like social outcasts, excelling in the world of gaming can provide a sense of mastery and confidence missing from their actual lives. "When you become one of the top players in a game like World of Warcraft, tens of thousands of players are essentially under you, so you become like a virtual god," explains Dr. Greenfield. "I created a Blood Elf Paladin called Sevrin, set up my own guild—the QT Yacht Club—and treated it like a full-time job, maintaining the website, recruiting new players, and organizing and leading raids," says Rosner, who quickly achieved celebrity status in the gaming community. "People I didn't know would message me and tell me how amazing I was.
It was the complete opposite of what I had in real life." Soon World of Warcraft took precedence over everything else. The Learning Brain on Games Practicing anything repetitively physically changes the brain. With time and effort, you get better at the specific task you're practicing, whether it's shooting at the enemy in a video game or hitting a baseball. Those repetitive actions and thoughts stimulate connections between brain cells, creating neural pathways between different parts of your brain. The more you practice a certain activity, the stronger that neural pathway becomes. That's the structural basis of learning. "Use it or lose it" applies not just to muscles in the body, but also the brain. Neural pathways that are not used eventually get pruned. In the early 2000s, most research suggested that perceptual and cognitive training was very specific to the task at hand. That's one of the problems with many brain training tools: it's easy for people to improve on the individual mini-tasks they're given—say, arranging a list in alphabetical order or completing a crossword puzzle—but those tasks don't always translate into better thinking in general. Video games seem to differ from other kinds of brain training. "Unlike some other brain training tools, video games activate the reward centers, making the brain more receptive to change," explains C. Shawn Green, Ph.D., assistant professor of psychology at the University of Wisconsin–Madison. Studies show, for example, playing action video games enhances visual capabilities, such as tracking multiple objects, mentally rotating objects, and storing and manipulating them in the memory centers of the brain. That holds true even for the most maligned action-entertainment games. Such games also require players to think of an overall strategy, perform several tasks simultaneously, and make decisions that have both an immediate and long-term impact. 
"That's very much like the multi-tasking inherent in most jobs today," says Dr. Willis. "These young people may be better equipped to switch between tasks easily, adapt to new information, and modify their strategy as new input comes in." Useful skills, to be sure, but exercised excessively they can also become problems. After all, when kids become so accustomed to multi-tasking and processing large amounts of information simultaneously, they may have trouble focusing on a lecture in a classroom setting. The Vulnerable Brain on Video Games The very nature of action-entertainment games not only attracts young people with focus, attention, and anger issues (particularly in the case of violent games); it also tends to reinforce these negative behaviors. While a number of companies have tried to create beneficial games for children with attention deficit hyperactivity disorder (ADHD), they've had limited success. "It's difficult to make games that are exciting for kids who have attention issues, but not so exciting that the game reinforces ADHD-like behaviors," says Dr. Hummer. Instead, kids with ADHD often play action video games to flood their senses with visual stimulation, motor challenges, and immediate rewards. In this environment, the ADHD brain functions in a way that allows these children to focus, so much so that they don't exhibit symptoms, such as distractibility, while gaming. "One of the big issues from a treatment perspective is: how do you tell a kid who has been running the world online and experiencing high degrees of sensory input to function in the real world, which is not very exciting comparatively?" says Dr. Greenfield. The stakes may be higher for a child with anger and behavior issues who finds solace in violent video games.
While experts disagree about what (if any) impact violent games have on actual violent behavior, some research shows a link between playing violent games and aggressive thoughts and behavior. For a kid who already has an aggressive personality, that could be a problem, say experts, since video games reward those aggressive tendencies. In fact, two separate studies found that playing a violent video game for just 10–20 minutes increased aggressive thoughts compared to those who played nonviolent games. However, not all games are equal—and each person's reaction to those games is different, too. "Asking what are the effects of video games is like asking what are the effects of eating food," says Dr. Hummer. "Different games do different things. They can have benefits or detriments depending what you're looking at." For Rosner, gaming was detrimental. His grades suffered, he missed assignments, and he almost failed to complete his first year of college. "Here I was in university, finally able to pursue my dream of becoming a film director, and I was throwing it away," he says. His academic advisor gave him two options: complete all of his essays for the first year within a span of three weeks, or fail and retake the first year. "I didn't want to let myself or my parents down, so I uninstalled World of Warcraft and focused on my work," he says. After turning away from the game, Rosner found other sources of pleasure. He joined a gym, started DJing at his university, and became much more active socially. "I couldn't believe what I had been missing," he says. Ironically, World of Warcraft led Rosner to achieve his dream of making films. His documentary, IRL — In Real Life, chronicles his adventures with Sevrin and how he learned to break free from gaming. More than 1 million people worldwide have viewed his film, which can be seen on YouTube. It has been featured at film festivals, on TV, and in newspapers and magazines. 
Today, gaming is just one form of entertainment for Rosner. He even plays World of Warcraft occasionally. But gaming no longer controls his life. "People still ask about my character, Sevrin," says Rosner, "but I've realized it's far more rewarding to achieve your potential in real life." Got a Gaming Addiction? The following warning signs may indicate a problem: Spending excessive amounts of time on the computer. Becoming defensive when confronted about gaming. Losing track of time. Preferring to spend more time with the computer than with friends or family. Losing interest in previously important activities or hobbies. Becoming socially isolated, moody, or irritable. Establishing a new life with online friends. Neglecting schoolwork and struggling to achieve acceptable grades. Spending money on unexplained activities. Attempting to hide gaming activities. Gaming: A Parent's Guide With news about video games turning kids into bullies—or zombies—and a growing number of experts warning about the dangers of too much screen time, it may be tempting to ban computers and smartphones altogether. Don't, say experts. If you forbid game play, you'll forfeit any opportunity to influence your children's behavior. A better approach: play with them, says Judy Willis, M.D., a neurologist and member of the American Academy of Neurology based in Santa Barbara, CA, who suggests starting with free online educational games. The key to ensuring your children have a healthy relationship with video games (and, yes, there is such a thing) is making sure they take advantage of pleasurable experiences outside these games. A few tips: PAY ATTENTION According to David Greenfield, Ph.D., founder of The Center for Internet and Technology Addiction and assistant clinical professor of psychiatry at the University of Connecticut School of Medicine, 80 percent of the time a child spends on the computer has nothing to do with academics.
Putting computers, smartphones, and other gaming devices in a central location—and not behind closed doors—allows you to monitor their activities. Learn how to check the computer's search history to confirm what your children have been doing on the Internet. ESTABLISH BOUNDARIES Set—and enforce—limits on screen time. "Kids are often unable to accurately judge the amount of time they spend gaming. Further, they are unconsciously reinforced to stay in the game," says Dr. Greenfield, who recommends no more than one or two hours of screen time on weekdays. Taking advantage of firewalls, electronic limits, and blocks on cell phones and Internet sites can help. START TALKING Discuss Internet use and gaming early on with your kids. Set clear expectations to help steer them in a healthy direction before a problem begins. Communication doesn't necessarily mean a formal talk. Rather, it's about giving your child an opportunity to share their interests and experiences with you. KNOW YOUR KID If your child is doing well in the real world, participating in school, sports, and social activities, then limiting game play may not be as important. The key, say experts, is maintaining a presence in their lives and being aware of their interests and activities. On the other hand, if you have a kid who already has anger issues, you might want to limit violent games, suggests Tom A. Hummer, Ph.D., assistant research professor in the department of psychiatry at Indiana University School of Medicine in Indianapolis. GET HELP For some young people, gaming becomes an irresistible obsession. If your child is showing signs of a video game addiction, help is available. Treatment options range from limited outpatient therapy to intensive residential boarding schools and inpatient programs.
" Useful skills, to be sure, but exercised excessively they can also become problems. After all, when kids become so accustomed to multi-tasking and processing large amounts of information simultaneously, they may have trouble focusing on a lecture in a classroom setting. The Vulnerable Brain on Video Games The very nature of action-entertainment games not only attracts young people with focus, attention, and anger issues (particularly in the case of violent games); it also tends to reinforce these negative behaviors. Anthony Rosner confronted his gaming addiction and turned it into subject matter for two documentary films that help others to understand the problem and how to deal with it. While a number of companies have tried to create beneficial games for children with attention deficit hyperactivity disorder (ADHD), they've had limited success. "It's difficult to make games that are exciting for kids who have attention issues, but not so exciting that the game reinforces ADHD-like behaviors," says Dr. Hummer. Instead, kids with ADHD often play action video games to flood their senses with visual stimulation, motor challenges, and immediate rewards. In this environment, the ADHD brain functions in a way that allows these children to focus, so much so that they don't exhibit symptoms, such as distractibility, while gaming. "One of the big issues from a treatment perspective is: how do you tell a kid who has been running the world online and experiencing high degrees of sensory input to function in the real world, which is not very exciting comparatively?" says Dr. Greenfield. The stakes may be higher for a child with anger and behavior issues who finds solace in violent video games. While experts disagree about what (if any) impact violent games have on actual violent behavior, some research shows a link between playing violent games and aggressive thoughts and behavior. 
For a kid who already has an aggressive personality, that could be a problem, say experts, since video games reward those aggressive tendencies. In fact, two separate studies found that playing a violent video game for just 10–20 minutes increased aggressive thoughts compared to those who played nonviolent games.
yes
Informatics
Are video games harmful to children's mental health?
yes_statement
"video" "games" have a "harmful" impact on "children"'s "mental" "health".. "children"'s "mental" "health" is negatively affected by "video" "games".
https://timesofindia.indiatimes.com/readersblog/lifecrunch/harmful-impact-of-the-internet-on-children-27202/
Harmful Impact of the Internet on Children
Harmful Impact of the Internet on Children The Internet is not only a source of information but a medium that connects almost every aspect of our lives. The Internet is a place of great ease and infinite connectivity, but also a place of great vulnerability. Online, we live through infinitely complex virtual networks, barely able to trace where our information comes from or goes, which poses a threat not only to our own lives but also to the lives of our children. The digital world plays an immense role in the day-to-day activities of 21st-century children. The U.S. National Library of Medicine at the National Institutes of Health (NIH) reports that children and teens between the ages of 8 and 18 spend about 44.5 hours a week in front of a digital screen, and according to another report, 23 per cent of kids say they feel addicted to video games. As the younger generation grows more and more tech-savvy and dependent on the internet, it is being exposed to the internet's various malicious sides. Online Games: With fast internet and advances in gaming technology, the internet has been flooded with thousands of online games. Even though online games are a fun way to socialize and encourage teamwork, they come paired with risks of their own that parents need to be aware of. Without the right guidance and supervision, games can expose children to risks such as gaming addiction, and addiction of any kind is harmful. Many games now come paired with the option of buying in-game perks, which are tempting and can only be bought with real, tangible money. Children end up buying these perks, blowing a hole in their parents' pockets. There have been cases around the world of young children buying game credits online without even informing their parents. Parents who are not as tech-savvy as their child come to know of it only a month or two later, when they have lost almost all their savings.
These addictions affect not only the pockets of the parents; game addiction has many more harmful effects. There have been many cases where children committed suicide because of their inability to complete various in-game tasks. Blue Whale, a notorious game that forced children to commit suicide as a task, took many lives. One needs to understand that these problems arose because of the lack of proper supervision from parents. Parents should understand that taking care of their children includes monitoring and supervising their activities, and always helping their children understand the rights and wrongs of life. Social Media: We have all seen an immense rise in the number of social media users in recent times. According to the statistics, 90 per cent of teens ages 13-17 are on social media, and 51 per cent of them are active on it daily. Shocking, isn't it? Social media platforms were introduced to connect people around the world. However, they are no longer the places that were introduced to us. In recent times, we have witnessed violent, sexual and hateful content dominating these platforms. People could have used them to share positive and educational material instead of dark content. This kind of content has a harmful psychological impact, especially on kids. No parent in the world would want their kids to absorb this kind of content, especially after seeing the huge rise in the number of depression cases. Parents are quite limited in keeping an eye on the content their kids absorb on these platforms. On the last World Mental Health Day, experts suggested taking a break from social media platforms, as they are having a negative impact on mental health. Health problems: Having access to the internet on smart devices has led to the overuse of devices like laptops, smartphones and tablets.
You would be shocked to know the number of diseases now reported among children globally. All this is a result of cutting out physical activity in favour of playing games, being on the internet, or binge-watching for long hours. There is a huge rise in the number of children suffering from insomnia, depression, obesity, and eyesight problems. Conclusion: As the paragraphs above make clear, the internet can become very harmful for children. As a parent, you may now be all the more worried and looking for solutions.
yes
Informatics
Are video games harmful to children's mental health?
yes_statement
"video" "games" have a "harmful" impact on "children"'s "mental" "health".. "children"'s "mental" "health" is negatively affected by "video" "games".
https://yvpc.sph.umich.edu/video-games-influence-violent-behavior/
Do Video Games Influence Violent Behavior? - Michigan Youth ...
Do Video Games Influence Violent Behavior? An op-ed article appeared recently in The New York Times discussing the Supreme Court's decision to strike down California's law barring the sale or rental of violent video games to people under 18. The author, Dr. Cheryl Olson, describes how the proposed law was based on the erroneous assumption that such games influence violent behavior in real life. Dr. Olson suggests that the deliberately outrageous nature of violent games, though disturbing, makes them easily discernible from real life, and that their interactivity could potentially make such games less harmful. She raises the question of how these two behaviors can be linked if youth violence has declined over the last several years while violent video game playing has increased significantly during the same period. This analysis ignores the fact that such variation may be explained by factors other than a link between the two. A spurious variable (a third variable that explains the relationship between two other variables) may explain the negative correlation between video game playing and violent behavior. As one example, socioeconomic status may explain both a decline in violent behavior and an increase in video game playing: more affluent youth have the means and time to buy and play video games, which keeps them safely inside while avoiding potentially violent interactions on the street. Dr. Olson also cites several studies that have failed to show a connection between violent video game playing and violent behavior among youth. This conclusion, however, may not be as clear cut as it appears. Youth violence remains a significant public health issue The decline of youth violence notwithstanding, it remains a significant public health issue that requires attention. Youth homicide remains the number one cause of death for African-American youth between 14 and 24 years old, and the number two cause for all children in this age group.
Furthermore, the proportion of youth admitting to having committed various violent acts within the previous 12 months has remained steady or even increased somewhat in recent years (http://pediatrics.aappublications.org/content/108/5/1222.full.pdf+html). Although the Columbine tragedy and others like it make the headlines, youth are killed every day at the hands of another. A more critical analysis of the link between video game playing and violence is necessary for fully understanding a complex problem like youth violent behavior, which has many causes and correlates. Studies support a link between violent video games and aggressive behavior Researchers have reported experimental evidence linking violent video games to more aggressive behavior, particularly in children who are at more sensitive stages in their socialization. These effects have been found to be particularly profound in the case of child-initiated virtual violence. In their book, Violent Video Game Effects on Children and Adolescents, Anderson, Gentile, and Buckley provide an in-depth analysis of three recent studies they conducted comparing the effects of interactive (video games) versus passive (television and movies) media violence on aggression and violence. In one study, 161 9- to 12-year-olds and 354 college students were randomly assigned to play either a violent or nonviolent video game. The participants subsequently played another computer game in which they set punishment levels to be delivered to another person participating in the study (the punishments were not actually administered). Information was also gathered on each participant's recent history of violent behavior; habitual video game, television, and movie habits; and several other control variables.
The authors reported three main findings: 1) participants who played one of the violent video games chose to punish their opponents with significantly more high-noise blasts than those who played the nonviolent games; 2) habitual exposure to violent media was associated with higher levels of recent violent behavior; and 3) interactive forms of media violence were more strongly related to violent behavior than exposure to non-interactive media violence. The second study was a cross-sectional correlational study of the media habits, aggression-related individual difference variables, and aggressive behaviors of an adolescent population. High school students (N=189) completed surveys about their violent TV, movie, and video game exposure, their attitudes towards violence, their perceived norms about violent behavior, and their personality traits. After statistically controlling for sex, total screen time, and aggressive beliefs and attitudes, the authors found that playing violent video games predicted heightened physically aggressive and violent behavior in the real world over the long term. In a third study, Anderson et al. conducted a longitudinal study of elementary school students to examine whether violent video game exposure resulted in increases in aggressive behavior over time. Surveys were given to 430 third, fourth, and fifth graders, their peers, and their teachers at two times during a school year. The surveys assessed both media habits and attitudes about violence. Results indicated that children who played more violent video games early in a school year came to see the world in a more aggressive way and became more verbally and physically aggressive later in the school year. These changes in attitude were noticed by both peers and teachers.
Bushman and Huesmann, in a 2006 Pediatrics and Adolescent Medicine article, examined effect size estimates using meta-analysis to look at the short- and long-term effects of violent media on aggression in children and adults. They reported a positive relationship between exposure to media violence and subsequent aggressive behavior, aggressive ideas, arousal, and anger across the studies they examined. Consistent with the theory that long-term effects require the learning of beliefs and that young minds can more easily encode new scripts via observational learning, they found that the long-term effects were greater for children. In a more recent review, Anderson et al. (2010) analyzed 136 studies representing 130,296 participants from several countries. These included experimental laboratory work, cross-sectional surveys, and longitudinal studies. Overall, they found consistent associations between playing violent video games and many measures of aggression, including self, teacher, and parent reports of aggressive behavior. Although the correlations were not high (r=0.17-0.20), they are typical for psychological studies in general and comparable with other risk factors for youth violence suggested in the 2001 Surgeon General's Report on youth violence. Violent video games may increase precursors to violent behavior, such as bullying Although playing violent video games may not necessarily determine violent or aggressive behavior, it may increase precursors to violent behavior. In fact, Dr. Olson points out that violent video games may be related to bullying, which researchers have found to be a risk factor for more serious violent behavior. Video game playing may therefore have an indirect effect on violent behavior by increasing risk factors for it. Doug Gentile notes that the only way for violent video games to affect serious criminal violence statistics would be if they were the primary predictor of crime, which they may not be.
Rather, they represent one risk factor among many for aggression (http://www.apa.org/monitor/2010/12/virtual-violence.aspx). Should video games be regulated? L. Rowell Huesmann (2010) points out that violent video game playing may be similar to other public health threats such as exposure to cigarette smoke and lead-based paint. Lung cancer from smoking and intelligence deficits from lead exposure are not guaranteed outcomes, but exposure increases the probability of each. Nevertheless, we have laws controlling cigarette sales to minors and the use of lead-based paint (and other lead-based products such as gasoline) because each is a risk factor for negative health outcomes. Huesmann argues the same analysis could be applied to video game exposure. Although exposure to violent video games is not the sole factor contributing to aggression and violence among children and adolescents, it is a contributing risk factor that is modifiable. Violent behavior is determined by many factors Finally, most researchers would agree that violent behavior is determined by many factors, which may combine in different ways for different youth. These factors involve neighborhoods, families, peers, and individual traits and behaviors. Researchers, for example, have found that living in a violent neighborhood and experiencing violence as a victim or witness is associated with an increased risk of violent behavior among youth. Yet this factor alone may not cause one to be violent, and most people living in such neighborhoods do not become violent perpetrators. Similarly, researchers have found consistently that exposure to family violence (e.g., spousal and child abuse, fighting and conflict) increases the risk of youth violent behavior, but does not necessarily result in violent children. Likewise, researchers have found that playing first-person killing video games is associated with increased risk of violent behavior, but not all the time.
Yet constant exposure to violence from multiple sources, including first-person violent video games, in the absence of positive factors that help to buffer these negative exposures, is likely to increase the probability that youth will engage in violent behavior. Despite disagreements on the exact nature of the relationship between violent video game playing and violent or aggressive behavior, significant evidence exists linking video game playing with violent behavior and its correlates. Although we are somewhat agnostic about the role of social controls like laws banning the sale of violent video games to minors, an argument against such social controls based on the conclusion that video games have no effect seems to oversimplify the issue. A more in-depth and critical analysis of the issue from multiple perspectives may help us both to understand the causes and correlates of youth violence more completely and to find some direction for creative solutions to this persistent social problem. Comments Regardless of the ambivalence towards legislation regulating video games, there is clearly the opportunity and necessity for parental monitoring of children's video gaming. The market for violent video games is clearly driven by the fact that people are buying them. Assuming that most young people depend on their parents for their expendable income, we can assume that parents are buying the games for their children either directly or indirectly; therefore the ultimate regulation of their use must come from within the family. I would suggest that the best prevention intervention would involve educating parents about the effects of these games and keeping them abreast of the latest offerings. Being a college student who plays video games from time to time, I can honestly say that violence in video games has come a long way.
When looking at video game history, there was once a time when a game such as Pong was entertaining as well as non-violent. My first encounter with violence in any video game, let alone any kind of violence, dates back to around 1995, when my uncle would let me play Doom on a computer. This game, although very graphic, was quite entertaining for a five-year-old like myself at the time. The only thing was that it did not have some of the settings that most violent games have now. Now we are seeing games such as the infamous Grand Theft Auto allowing you to carjack individuals and do multitudes of drive-bys while trying to escape the police. In some ways I will agree that rating video games has done some good, but we still have young kids acquiring violent games through other means. As stated in prior comments, the change that would greatly impact how violent games are controlled has to come from parents and the household. At the end of the day we need hard evidence that will deter parents from letting their kids play these games, as well as some kind of movement that includes game manufacturers, who need to realize that their games are impacting younger individuals differently than their intended audience. I play video games all the time and I have never changed, no matter how graphic the game is, and I don't think games cause any misbehavior. I'm 16 and I think that games have nothing to do with anger, stress or bullying. That's the point of this whole article. It just depends on everything. Honestly, it's the peers that cause the violent behavior. All seriousness. We go by what we are taught when we are young. Video games are a peer. And if you have a negative personal peer and a negative object peer, you tend to be more aggressive than one who has only one negative input.
I play video games like Grand Theft Auto and, to be honest, I haven't changed from it. In fact, I'm currently a student at college and I'm a well-mannered, kind, friendly person, and will only fight or be violent if someone starts a fight with me or is violent towards me! I am fifteen and against violent video games because of my past and nature. I know for a fact that I enjoy non-graphic violence and that I get it through anime and two different video games, but if I didn't limit myself and didn't stay away from graphic first-person shooting games I would be totally different. I would definitely NOT be an all-A student or the president of a club. I would be the smart-ass girl with purple hair, a knife in her pocket, all F's, picking a fight with anyone stupid enough to even say my name in the morning. So it's difficult to say which, but I think I agree with this article; it definitely isn't all media, it also depends on the person and their environment and personality. Are you kidding? You think your middle school success is because of your lack of violent video games? Are you just typing so you can be included in the thread? I was valedictorian of my high school and a team leader of my football team, which won cities three years in a row. Yet all through high school I played Call of Duty (probably one of the most violent video games out there today) competitively and still managed to achieve everything I did. It isn't the games that make a person's personality, but, on the other hand, at a younger age I could see how it MAY be a contributing factor to slight aggression. Adolescents have parents for a reason. They need to take on some responsibility by preventing their child from playing certain games if they can see an emotional change in their child. Personally, I find the accusation of video games being the reasoning behind violence in real life absurd, and a little insulting. Honestly, it is just a stereotype formed by some people.
My brother plays online games such as Call of Duty: Black Ops, and he gets abused because he wins games against others; they threaten him, saying things like "I'm going to find where you live and kill you". He is 20 and they are about 17. It shows how other people are displaying weak behaviour while my brother shows strong behaviour by saying nothing back to those weak people. CoD… really… those are for the kids. The God of War series contains some of the bloodiest games. CoD… not so much. Video games can have an effect on people if they allow them to. It's a choice. All psychological. True, true. Also, the games that are most violent don't even have human physics included in them. Like Mortal Kombat: can you rip someone in half with your bare hands? (If you can, I don't want to see that.) Grand Theft Auto is the only comeback anyone has when asked if video games cause violent behavior. What I'm trying to say is, basically, you can't even do most of the stuff in games. Also, it's not like everyone that plays a violent video game goes out and says, "Huh, that game was awesome. I think I'm gonna go out and run over people with my car and all of a sudden buy a gun. Yeah, that'll just make my day…" You know, I think it's pretty ridiculous how people think that video games cause violence. NO, PEOPLE, IT DOES NOT!!! Dumb people do things on their own. Get your kids off the games and go spend time with them! I am 15 years old and I play games like Halo or Gears of War and all that stuff, but I have in fact gotten less aggressive over time. The claim that violence in video games makes people violent is wrong; it is all down to social and family life, as well as your personality.
I'm truly amused by how mixed this article is. First of all, violent video games don't create violent children. Proof of this is everywhere: most studies find that violent children tend to prefer violent video games. If there is a predisposition to a behavior, simple psychology shows that a subject will tend to move towards that behavior in any way possible. Also, a few studies I found (a ten-country comparison in the Washington Post suggesting there's little or no link between video games and gun murders is one of them) show the EXACT OPPOSITE correlation. Just some info from a HS senior working on a trend paper. I would really like to point something out here. If violent video games cause violence now, then what caused it before? Was there a significant rise in domestic violence when video game violence was introduced? Also, what would really drive a person to kill others: playing a game that involved killing virtual people, or severe mental/physical trauma caused by real people that they interacted with in person on a regular basis? What really causes violence is mental instability, not violent video games. Although someone who was mentally unstable who played a violent video game might become provoked, the general population would not. What most people see as "violence" while playing is often just frustration, which can cause violent behavior no matter the source. Chess could cause violence in the same way. What really needs to be done is this: improve the reliability and effectiveness of our mental health institutions, and pay more attention to those who need it. More thorough tests and more ease of access to help. Because almost all violence can be traced back to a point when someone needed help but did not, or could not, get it.
Violent video games definitely are regulated, which is why they have a rating and require ID before purchase, but I think it’s almost entirely up to the parents to prevent violent behavior. Great read. My friend just pointed me to this article since we were looking for some arguments on regulating video games. This article brings up some valid points about children who play video games. For example, kids who act violently tend to act violently more often when given access to violent media. I’m truly enjoying the design and layout of your blog. It’s very easy on the eyes, which makes it much more enjoyable for me to come here and visit more often. Did you hire out a developer to create your theme? Exceptional work! Have you ever considered including a little bit more than just your articles? I mean, what you say is important and everything. Nevertheless, just imagine if you added some great images or video clips to give your posts more “pop”! Your content is excellent, but with images and clips this site could undeniably be one of the most beneficial in its field. Superb blog! I severely doubt that video games have much of an effect on kids… and I imagine that violent kids prefer violent games, not that violent games make for violent kids. Sometimes we need to appreciate that our kids aren’t all angels before such trivial things take hold of them and make them do nasty things…
no
Informatics
Are video games harmful to children's mental health?
no_statement
"video" "games" do not "harm" "children"'s "mental" "health".. "children"'s "mental" "health" is not negatively impacted by "video" "games".
https://www.apa.org/news/press/releases/2013/11/video-games
Video games play may provide learning, health, social benefits
WASHINGTON — Playing video games, including violent shooter games, may boost children’s learning, health and social skills, according to a review of research on the positive effects of video game play to be published by the American Psychological Association. The study comes out as debate continues among psychologists and other health professionals regarding the effects of violent media on youth. An APA task force is conducting a comprehensive review of research on violence in video games and interactive media and will release its findings in 2014. “Important research has already been conducted for decades on the negative effects of gaming, including addiction, depression and aggression, and we are certainly not suggesting that this should be ignored,” said lead author Isabela Granic, PhD, of Radboud University Nijmegen in The Netherlands. “However, to understand the impact of video games on children’s and adolescents’ development, a more balanced perspective is needed.” The article will be published in APA’s flagship journal, American Psychologist. While one widely held view maintains playing video games is intellectually lazy, such play actually may strengthen a range of cognitive skills such as spatial navigation, reasoning, memory and perception, according to several studies reviewed in the article. This is particularly true for shooter video games that are often violent, the authors said. A 2013 meta-analysis found that playing shooter video games improved a player’s capacity to think about objects in three dimensions, just as well as academic courses to enhance these same skills, according to the study. “This has critical implications for education and career development, as previous research has established the power of spatial skills for achievement in science, technology, engineering and mathematics,” Granic said. This enhanced thinking was not found with playing other types of video games, such as puzzles or role-playing games. 
Playing video games may also help children develop problem-solving skills, the authors said. The more adolescents reported playing strategic video games, such as role-playing games, the more they improved in problem solving and school grades the following year, according to a long-term study published in 2013. Children’s creativity was also enhanced by playing any kind of video game, including violent games, but not when the children used other forms of technology, such as a computer or cell phone, other research revealed. Simple games that are easy to access and can be played quickly, such as “Angry Birds,” can improve players’ moods, promote relaxation and ward off anxiety, the study said. “If playing video games simply makes people happier, this seems to be a fundamental emotional benefit to consider,” said Granic. The authors also highlighted the possibility that video games are effective tools to learn resilience in the face of failure. By learning to cope with ongoing failures in games, the authors suggest that children build emotional resilience they can rely upon in their everyday lives. Another stereotype the research challenges is the socially isolated gamer. More than 70 percent of gamers play with a friend and millions of people worldwide participate in massive virtual worlds through video games such as “Farmville” and “World of Warcraft,” the article noted. Multiplayer games become virtual social communities, where decisions need to be made quickly about whom to trust or reject and how to lead a group, the authors said. People who play video games, even if they are violent, that encourage cooperation are more likely to be helpful to others while gaming than those who play the same games competitively, a 2011 study found. The article emphasized that educators are currently redesigning classroom experiences, integrating video games that can shift the way the next generation of teachers and students approach learning. 
Likewise, physicians have begun to use video games to motivate patients to improve their health, the authors said. In the video game “Re-Mission,” child cancer patients can control a tiny robot that shoots cancer cells, overcomes bacterial infections and manages nausea and other barriers to adhering to treatments. A 2008 international study in 34 medical centers found significantly greater adherence to treatment and cancer-related knowledge among children who played “Re-Mission” compared to children who played a different computer game. “It is this same kind of transformation, based on the foundational principle of play, that we suggest has the potential to transform the field of mental health,” Granic said. “This is especially true because engaging children and youth is one of the most challenging tasks clinicians face.” The authors recommended that teams of psychologists, clinicians and game designers work together to develop approaches to mental health care that integrate video game playing with traditional therapy. Isabela Granic can be contacted by email, cell: 011.31.6.19.50.00.99 or work: 011.31.24.361.2142 The American Psychological Association, in Washington, D.C., is the largest scientific and professional organization representing psychology in the United States. APA's membership includes more than 134,000 researchers, educators, clinicians, consultants and students. Through its divisions in 54 subfields of psychology and affiliations with 60 state, territorial and Canadian provincial associations, APA works to advance the creation, communication and application of psychological knowledge to benefit society and improve people's lives.
no
Zoogeography
Are wolves native to Africa?
yes_statement
"wolves" are "native" to africa.. africa is home to "wolves".
https://en.wikipedia.org/wiki/African_wolf
African wolf - Wikipedia
It was previously classified as an African variant of the golden jackal (Canis aureus), with at that time at least one subspecies (C. a. lupaster) having been classified as a wolf. In 2015, a series of analyses on the species' mitochondrial DNA and nuclear genome demonstrated that it was, in fact, distinct from the golden jackal, and more closely related to the gray wolf and the coyote (Canis latrans).[5][6] It is nonetheless still close enough to the golden jackal to produce hybrid offspring, as indicated through genetic tests on jackals in Israel,[5] and a 19th-century captive crossbreeding experiment.[7] Further studies demonstrated that it is the descendant of a genetically admixed canid of 72% gray wolf (Canis lupus) and 28% Ethiopian wolf (Canis simensis) ancestry.[8] It plays a prominent role in some African cultures; it was considered sacred in ancient Egypt, particularly in Lycopolis, where it was venerated as a god. In North African folklore, it is viewed as an untrustworthy animal whose body parts can be used for medicinal or ritualistic purposes,[9][10][11] while it is held in high esteem in Senegal's Serer religion as being the first creature to be created by the god Roog.[12] The African wolf is intermediate in size between the African jackals (L. mesomelas and L. adusta) and the small subspecies of gray wolves,[19] with both sexes weighing 7–15 kg (15–33 lb), and standing 40 cm in height.[4] There is however a high degree of size variation geographically, with Western and Northern African specimens being larger than their East African cousins.[19] It has a relatively long snout and ears, while the tail is comparatively short, measuring 20 cm in length. Fur color varies individually, seasonally and geographically, though the typical coloration is yellowish to silvery grey, with slightly reddish limbs and black speckling on the tail and shoulders. The throat, abdomen and facial markings are usually white, and the eyes are amber-colored.
Females bear two to four pairs of teats.[4] Although superficially similar to the golden jackal (particularly in East Africa), the African wolf has a more pointed muzzle and sharper, more robust teeth.[5] The ears are longer in the African wolf, and the skull has a more elevated forehead.[20]
Various C. lupaster phenotypes, ranging from gracile jackal-like morphs to more robust wolf-like ones.
Aristotle wrote of wolves living in Egypt, mentioning that they were smaller than the Greek kind. Georg Ebers wrote of the wolf being among the sacred animals of Egypt, describing it as a "smaller variety" of wolf to those of Europe, and noting how the name Lykopolis, the Ancient Egyptian city dedicated to Anubis, means "city of the wolf".[21][22] An attempt was also made in 1821 to hybridise the two species in captivity, resulting in the birth of five pups, three of which died before weaning. The two survivors were noted to never play with each other, and had completely contrasting temperaments: One pup inherited the golden jackal's shyness, while the other was affectionate toward its human captors.[7] English biologist G.J. Mivart emphasized the differences between the African wolf and the golden jackal in his writings: ... it is a nice question whether the Common Jackal of North Africa should or should not be regarded as of the same species [as the golden jackal] ... Certainly the differences of coloration which exist between these forms are not nearly so great as those which are to be found to occur between the different local varieties of C. lupus. We are nevertheless inclined ... to keep the North-African and Indian Jackals distinct ... The reason why we prefer to keep them provisionally distinct is that though the difference between the two forms (African and Indian) is slight as regards coloration, yet it appears to be a very constant one.
Out of seventeen skins of the Indian form, we have only found one which is wanting in the main characteristic as to difference of hue. The ears also are relatively shorter than in the North-African form. But there is another character to which we attach greater weight. However much the different races of Wolves differ in size, we have not succeeded in finding any constant distinctive characters in the form of the skull or the proportions of the lobes of any of the teeth. So far as we have been able to observe, such differences do exist between the Indian and North-African Jackals. The canids present in Egypt in particular were noted to be so much more gray wolf-like than populations elsewhere in Africa that W.F. Hemprich and C.G. Ehrenberg gave them the binomial name Canis lupaster in 1832. Likewise, T.H. Huxley, upon noting the similarities between the skulls of lupaster and Indian wolves, classed the animal as a subspecies of the gray wolf. However, the animal was subsequently synonymised with the golden jackal by Ernst Schwarz in 1926. The taxonomy of the Jackals in the Near East is still a matter of dispute. On the basis of skeletal material, however, it can be stated that the Wolf Jackal is specifically distinct from the much smaller Golden Jackal.[26] In 1981, zoologist Walter Ferguson argued in favor of lupaster being a subspecies of the gray wolf based on cranial measurements, stating that the classing of the animal as a jackal was based solely on the animal's small size, and predated the discovery of C. l. arabs, which is intermediate in size between C. l. lupus and lupaster.[22] Further doubts over its being conspecific with the golden jackal of Eurasia arose in December 2002, when a canid was sighted in Eritrea's Danakil Desert whose appearance did not correspond to that of the golden jackal or the six other recognized species of the area, but strongly resembled that of the gray wolf.
The area had previously been largely unexplored because of its harsh climate and embroilment in the Eritrean War of Independence and subsequent Eritrean–Ethiopian War, though local Afar tribesmen knew of the animal, and referred to it as wucharia (wolf).[13] The animal's wolf-like qualities were confirmed in 2011, when several golden "jackal" populations in Egypt and the Horn of Africa classed as Canis aureus lupaster[19] were found to have mtDNA sequences more closely resembling those found in gray wolves than those of golden jackals.[21] These wolf-like mtDNA sequences were found to occur over a 6,000 km wide area, encompassing Algeria, Mali and Senegal. Furthermore, the sampled African specimens displayed much more nucleotide and haplotype diversity than that present in Indian and Himalayan wolves, thus indicating a larger ancestral population, and an effective extant population of around 80,000 females. Both these studies proposed reclassifying Canis aureus lupaster as a subspecies of the gray wolf.[27] In 2015, a more thorough comparative study of mitochondrial and nuclear genomes on a larger sample of wolf-like African canids from northern, eastern and western Africa showed that they were in fact all distinct from the golden jackal, with a genetic divergence of around 6.7%,[5][28][29] which is greater than that between gray wolves and coyotes (4%) and that between gray wolves and domestic dogs (0.2%).[30] Furthermore, the study showed that these African wolf-like canids (renamed Canis lupaster, or African wolves) were more closely related to gray wolves and coyotes than to golden jackals,[5][31] and that C. l. lupaster merely represents a distinct phenotype of the African wolf rather than an actual gray wolf.
The phylogenetic tree below is based on nuclear sequences:[5] It was estimated that the African wolf diverged from the wolf–coyote clade 1.0–1.7 million years ago, during the Pleistocene, and therefore its superficial similarity to the golden jackal (particularly in East Africa, where African wolves are similar in size to golden jackals) would be a case of parallel evolution. Considering its phylogenetic position and the canid fossil record, it is likely that the African wolf evolved from larger ancestors that became progressively more jackal-like in size upon populating Africa on account of interspecific competition with both larger and smaller indigenous carnivores. Traces of African wolf DNA were identified in golden jackals in Israel, which adjoins Egypt, thus indicating the presence of a hybrid zone.[5] The study's findings were corroborated that same year by Spanish, Mexican and Moroccan scientists analyzing the mtDNA of wolves in Morocco, who found that the specimens analyzed were distinct from both golden jackals and gray wolves but bore a closer relationship to the latter.[6] Studies on RAD sequences found instances of African wolves hybridizing with both feral dogs and Ethiopian wolves.[32] In 2017, it was proposed by scientists at the Oslo and Helsinki Universities that the binomial name C. anthus was a nomen dubium, on account of the fact that Cuvier's 1820 description of the holotype, a female collected from Senegal, seems to be describing the side-striped jackal rather than the actual African wolf, and does not match the appearance of a male specimen described by Cuvier in his later writings. This ambiguity, coupled with the disappearance of the holotype's remains, led to the scientists proposing giving priority to Hemprich and Ehrenberg's name C. 
lupaster, due to the type specimen having a more detailed and consistent description, and its remains being still examinable at the Museum für Naturkunde.[19] The following year, a major genetic study of Canis species also referred to the African wolf as Canis lupaster.[8] In 2019, a workshop hosted by the IUCN/SSC Canid Specialist Group recommended that because the specimen identified as Canis anthus Cuvier, 1820 was uncertain, the species should be known as Canis lupaster Hemprich and Ehrenberg, 1832 until Canis anthus can be validated.[33] In 2018, whole genome sequencing was used to compare members of the genus Canis. The study supports the African wolf being distinct from the golden jackal, and with the Ethiopian wolf being genetically basal to both. Two genetically distinct African wolf populations exist in northwestern and eastern Africa. This suggests that Ethiopian wolves – or a close and extinct relative – once had a much larger range within Africa to admix with other canids. There is evidence of gene flow between the eastern population and the Ethiopian wolf, which has led to the eastern population being distinct from the northwestern population. The common ancestor of both African wolf populations was a genetically admixed canid of 72% gray wolf and 28% Ethiopian wolf ancestry. There is evidence of gene flow between African wolves, golden jackals, and gray wolves. One African wolf from the Egyptian Sinai Peninsula showed high admixture with the Middle Eastern gray wolves and dogs, highlighting the role of the land bridge between the African and other continents in canid evolution. 
African wolves form a sister clade to Middle Eastern gray wolves based on mitochondrial DNA, but to coyotes and gray wolves based on nuclear DNA.[8] Between 2011 and 2015, two mtDNA studies found that the Himalayan wolf and Indian wolf were closer to the African wolf than they were to the Holarctic gray wolf.[21][5] In 2017, a study of mitochondrial DNA, X-chromosome (maternal lineage) markers and Y-chromosome (male lineage) markers found that the Himalayan wolf is genetically basal to the Holarctic gray wolf. The Himalayan wolf shares a maternal lineage with the African wolf, and possesses a unique paternal lineage that falls between the gray wolf and the African wolf.[34] Although in the past several attempts have been made to synonymise many of the proposed names, the taxonomic position of West African wolves, in particular, is too confused to come to any precise conclusion, as the collected study materials are few. Prior to 1840, six of the 10 supposed West African subspecies were named or classed almost entirely because of their fur color.[35] The species' display of high individual variation, coupled with the scarcity of samples and the lack of physical barriers on the continent preventing gene flow, brings into question the validity of some of the West African forms.[35] However, a study showed that the genetic divergence of all of the African wolves occurred between 50,000 and 10,500 years ago, with most occurring between 30,000 and 16,000 years ago during the Late Glacial Maximum (33,000–16,000 years ago). There were very dry conditions across the Sahara during this period. The study proposes that these wolves were isolated in refugia and therefore isolated for hundreds of generations, leading to genetic divergence.[36] A large, stoutly built subspecies with proportionately short ears and presenting a very gray wolf-like phenotype, standing 40.6 cm (16.0 in) in shoulder height and 127 cm (50 in) in body length. 
The upper parts are yellowish-gray tinged with black, while the muzzle, the ears and the outer surfaces of the limbs are reddish-yellow. The fur around the mouth is white.[27][37] A dwarf subspecies measuring only 12 inches in shoulder height, it is generally of a grayish-yellow color, mingled with only a small proportion of black. The muzzle and legs are more decidedly yellow, and the underparts are white.[37] The African wolf's social organisation is extremely flexible, varying according to the availability and distribution of food. The basic social unit is a breeding pair, followed by its current offspring, or offspring from previous litters staying as "helpers".[15] Large groups are rare, and have only been recorded to occur in areas with abundant human waste. Family relationships among African wolves are comparatively peaceful in relation to those of the black-backed jackal; although the sexual and territorial behavior of grown pups is suppressed by the breeding pair, they are not actively driven off once they attain adulthood. African wolves also lie together and groom each other much more frequently than black-backed jackals. In the Serengeti, pairs defend permanent territories encompassing 2–4 km², and will vacate their territories only to drink or when lured by a large carcass.[4] The pair patrols and marks its territory in tandem. Both partners and helpers will react aggressively towards intruders, though the greatest aggression is reserved for intruders of the same sex; pair members do not assist each other in repelling intruders of the opposite sex.[4]
Threat postures in C. l. lupaster (left) and C. l. anthus (right)
The African wolf's courtship rituals are remarkably long, during which the breeding pair remains almost constantly together. Prior to mating, the pair patrols and scent marks its territory. Copulation is preceded by the female holding her tail out and angled in such a way that her genitalia are exposed.
The two approach each other, whimpering, lifting their tails and bristling their fur, displaying varying intensities of offensive and defensive behavior. The female sniffs and licks the male's genitals, whilst the male nuzzles the female's fur. They may circle each other and fight briefly. The copulatory tie lasts roughly four minutes. Towards the end of estrus, the pair drifts apart, with the female often approaching the male in a comparatively more submissive manner. In anticipation of the role he will take in raising pups, the male regurgitates or surrenders any food he has to the female. In the Serengeti, pups are born in December–January, and begin eating solid food after a month. Weaning starts at the age of two months, and ends at four months. At this stage, the pups are semi-independent, venturing up to 50 meters from the den, even sleeping in the open. Their playing behavior becomes increasingly more aggressive, with the pups competing for rank, which is established after six months. The female feeds the pups more frequently than the male or helpers do, though the presence of the latter allows the breeding pair to leave the den and hunt without leaving the litter unprotected.[4] The African wolf's life centers around a home burrow, which usually consists of an abandoned and modified aardvark or warthog earth. The interior structure of this burrow is poorly understood, though it is thought to consist of a single central chamber with 2–3 escape routes. The home burrow can be located in both secluded areas or surprisingly near the dens of other predators.[39] African wolves frequently groom one another, particularly during courtship, during which it can last up to 30 minutes. Nibbling of the face and neck is observed during greeting ceremonies. When fighting, the African wolf slams its opponents with its hips, and bites and shakes the shoulder. 
The species' postures are typically canine, and it has more facial mobility than the black-backed and side-striped jackals, being able to expose its canine teeth like a dog.[4] The vocalisations of the African wolf are similar to those of the domestic dog, with seven sounds having been recorded,[17] including howls, barks, growls, whines and cackles.[4] Subspecies can be recognised by differences in their howls.[17] One of the most commonly heard sounds is a high, keening wail, of which there are three varieties; a long single toned continuous howl, a wail that rises and falls, and a series of short, staccato howls. These howls are used to repel intruders and attract family members. Howling in chorus is thought to reinforce family bonds and establish territorial status.[4] A comparative analysis of African wolf and some gray wolf subspecies' howls demonstrated that the former's howls bear similarities to those of the Indian wolf, being high-pitched and of relatively short duration.[40] The African wolf rarely catches hares, due to their speed. Gazelle mothers (often working in groups of two or three) are formidable when defending their young against single wolves, which are much more successful in hunting gazelle fawns when working in pairs. A pair of wolves will methodically search for concealed gazelle fawns within herds, tall grass, bushes and other likely hiding places.[4] Although it is known to kill animals up to three times its own weight, the African wolf targets mammalian prey much less frequently than the black-backed jackal overall.[4] On capturing large prey, the African wolf makes no attempt to kill it; instead it rips open the belly and eats the entrails. Small prey is typically killed by shaking, though snakes may be eaten alive from the tail end. 
The African wolf often carries away more food than it can consume, and caches the surplus, which is generally recovered within 24 hours.[39] When foraging for insects, the African wolf turns over dung piles to find dung beetles. During the dry seasons, it excavates dung balls to reach the larvae inside. Grasshoppers and flying termites are caught either in mid-air or by pouncing on them while they are on the ground. It is fiercely intolerant of other scavengers, having been known to dominate vultures on kills – one can hold dozens of vultures at bay by threatening, snapping and lunging at them.[4] The African wolf inhabits a number of different habitats; in Algeria it lives in Mediterranean, coastal and hilly areas (including hedged farmlands, scrublands, pinewoods and oak forests), while populations in Senegal inhabit tropical, semi-arid climate zones including Sahelian savannahs. Wolf populations in Mali have been documented in arid Sahelian massifs.[27] In Egypt, the African wolf inhabits agricultural areas, wastelands, desert margins, rocky areas, and cliffs. 
At Lake Nasser, it lives close to the lakeshore.[16] In 2012, African wolves were photographed in Morocco's Azilal Province at an elevation of 1,800 meters.[14][3] It apparently does well in areas where human density is high and natural prey populations low, as is the case in the Enderta district in northern Ethiopia.[42] This wolf has been reported in the very dry Danakil Depression desert on the coast of Eritrea, in eastern Africa.[13] The African wolf generally manages to avoid competing with black-backed and side-striped jackals by occupying a different habitat (grassland, as opposed to the closed and open woodlands favored by the latter two species) and being more active during the daytime.[43] Nevertheless, the African wolf has been known to kill the pups of black-backed jackals,[15] but has in turn been observed to be dominated by adults during disputes over carcasses.[17] It often eats alongside African wild dogs, and will stand its ground if the dogs try to harass it.[4] Encounters with Ethiopian wolves are usually antagonistic, with Ethiopian wolves dominating African wolves if the latter enter their territories, and vice versa. Although African wolves are inefficient rodent hunters and thus not in direct competition with Ethiopian wolves, it is likely that heavy human persecution prevents the former from attaining numbers large enough to completely displace the latter.[44] Nevertheless, there is at least one record of an African wolf pack adopting a male Ethiopian wolf.[45] African wolves will feed alongside spotted hyenas, though they will be chased if they approach too closely. Spotted hyenas will sometimes follow wolves during the gazelle fawning season, as wolves are effective at tracking and catching young animals. Hyenas do not take to eating wolf flesh readily; four hyenas were reported to take half an hour in eating one. 
Overall, the two animals typically ignore each other when no food or young is at stake.[46] Wolves will confront a hyena approaching too closely to their dens by taking turns in biting the hyena's hocks until it retreats.[4] [Image captions: the African golden jackal depicted as Anubis, vignette from the Papyrus of Ani, British Museum; wolf-shaped bronze amulet from Egypt's Ptolemaic Period (711–30 BCE).] The wolf was the template of numerous Ancient Egyptian deities, including Anubis, Wepwawet and Duamutef.[47] The wolf was sacred in Lycopolis, whose inhabitants would mummify wolves and store them in chambers, as opposed to other areas of Egypt, where wolves were buried at their place of death. According to Diodorus Siculus in Bibliotheca historica, there were two reasons why the wolf was held in such high regard: the first being the animal's affinity to the dog, and the second being a legend that told of how Lycopolis received its name after a pack of wolves repelled an Ethiopian invasion. Plutarch noted in his On the Worship of Isis and Osiris that Lycopolis was the only nome in Egypt where people consumed sheep, as the practice was associated with the wolf, which was revered as a god. The importance of the wolf in Lycopolite culture continued through to the Roman period, when images of the animal were minted on the reverse sides of coins. Herodotus mockingly wrote of a festival commemorating Rhampsinit's descent to the underworld, during which a priest would be led by two wolves to the temple of Ceres.[48] Arab Egyptian folklore holds that the wolf can cause chickens to faint from fear simply by passing underneath their roosts, and associates its body parts with various forms of folk magic: placing a wolf's tongue in a house is believed to cause the inhabitants to argue, and its meat is thought to be useful in treating insanity and epilepsy.
Its heart is believed to protect the bearer from wild animal attacks, while its eye can protect against the evil eye.[9] Although considered haram in Islamic dietary laws, the wolf is important in Moroccan folk medicine.[10] Edvard Westermarck wrote of several remedies derived from the wolf in Morocco, including the use of its fat as a lotion, the consumption of its meat to treat respiratory ailments, and the burning of its intestines in fumigation rituals meant to increase the fertility of married couples. The wolf's gall bladder was said to have various uses, including curing sexual impotence and serving as a charm for women wishing to divorce their husbands. Westermarck noted, however, that the wolf was also associated with more nefarious qualities: it was said that a child who eats wolf flesh before reaching puberty will be forever cursed with misfortune, and that scribes and saintly persons refrain from consuming it even in areas where it is socially acceptable, as doing so would render their charms useless.[11] The African wolf is not common in Neolithic rock art, though it does occasionally appear; a definite portrayal is shown in the Kef Messiouer cave in Algeria's Tébessa Province, where it is shown feeding on a wild boar carcass alongside a lion pride. It plays a role in Berber mythology, particularly that of the Ait Seghrouchen of Morocco, where it fills a similar role in folktales as the red fox does in Medieval European fables, though it is often the victim of the more cunning hedgehog.[49] The African wolf plays a prominent role in the Serer religion's creation myth, where it is viewed as the first living creature created by Roog, the Supreme God and Creator.[12][50] In one aspect, it can be viewed as an Earth-diver sent to Earth by Roog; in another, as a fallen prophet for disobeying the laws of the divine.
The wolf was the first intelligent creature on Earth, and it is believed that it will remain on Earth after human beings have returned to the divine. The Serers believe that the wolf not only knows in advance who will die, but also traces in advance the paths of those who will attend funerals. The movements of the wolf are carefully observed, because the animal is viewed as a seer who came from the transcendent and maintains links with it. Although believed to be rejected in the bush by other animals and deprived of its original intelligence, it is still respected because it dared to resist the supreme being, who nonetheless keeps it alive.[12]
Furthermore, the study showed that these African wolf-like canids (renamed Canis lupaster, or African wolves) were more closely related to gray wolves and coyotes than to golden jackals,[5][31] and that C. l. lupaster merely represents a distinct phenotype of the African wolf rather than an actual gray wolf. A phylogenetic tree based on nuclear sequences supports this placement.[5] It was estimated that the African wolf diverged from the wolf–coyote clade 1.0–1.7 million years ago, during the Pleistocene, and therefore its superficial similarity to the golden jackal (particularly in East Africa, where African wolves are similar in size to golden jackals) would be a case of parallel evolution. Considering its phylogenetic position and the canid fossil record, it is likely that the African wolf evolved from larger ancestors that became progressively more jackal-like in size upon populating Africa on account of interspecific competition with both larger and smaller indigenous carnivores. Traces of African wolf DNA were identified in golden jackals in Israel, which adjoins Egypt, thus indicating the presence of a hybrid zone.[5] The study's findings were corroborated that same year by Spanish, Mexican and Moroccan scientists analyzing the mtDNA of wolves in Morocco, who found that the specimens analyzed were distinct from both golden jackals and gray wolves but bore a closer relationship to the latter.[6] Studies on RAD sequences found instances of African wolves hybridizing with both feral dogs and Ethiopian wolves.[32]
yes
Zoogeography
Are wolves native to Africa?
yes_statement
"wolves" are "native" to africa.. africa is home to "wolves".
https://www.seacrestwolfpreserve.org/types-of-wolves
Types of Wolves | seacrest
types of wolves Wolves are adapted to thrive in a variety of different environments. Gray wolves once roamed across most of the world's northern hemisphere. Currently, there are two universally recognized species of wolves in the world, the red and the gray. However, there is a growing debate over whether some subspecies are actually distinct species of wolves. Wolves once roamed almost all of North America. However, when the settlers arrived from Europe, the extermination of red and gray wolves began. Today, the range and population of all wolves in North America is significantly reduced and varies by region. Although the taxonomy of wolves has been an ongoing debate in the scientific community, there are two universally recognized species of wolves in the world: the gray wolf (Canis lupus) and the red wolf (Canis rufus), both of which are found in the United States. Additionally, recent genomic research suggests there are potentially more distinct species, including the eastern (Algonquin) wolf (Canis lycaon), which was previously considered a subspecies of gray wolf found in eastern North America. United States Current populations of the North American gray wolf are drastically different between Alaska (~8,000-10,000 wolves) and the lower 48 states (~5,000-6,500 wolves). In Alaska, wolves inhabit about 85% of the state, including the mainland and all major islands, and have never been considered endangered there. Both the rocky mountain gray and arctic subspecies call Alaska home. In the lower 48 states, there are 3 subspecies of gray wolves: the rocky mountain gray wolf, the great plains gray wolf and the Mexican gray wolf. The rocky mountain gray wolf is located in the northern Rocky Mountains, while the great plains gray wolves call the Western Great Lakes region their home. The Mexican gray wolf once lived throughout Arizona, New Mexico, Texas and Mexico; however, persecution drove them to near extinction.
In 1998, 11 captive-reared wolves were released into a recovery area. As of 2019, there are 113 Mexican gray wolves in the wild and they are considered endangered. You can learn more about the Mexican gray wolf here! The United States is also home to the red wolf, a distinct species from the gray wolf. The red wolf once lived throughout the entire southeast but was driven to near extinction by government-sponsored extermination programs. Unfortunately, the red wolf is an endangered species, with fewer than 25 wild wolves remaining in North Carolina. However, there are dozens of captive breeding programs working together on a national recovery effort for the red wolf. You can learn more about the red wolf here! There is also research suggesting that there is a 3rd distinct species of wolves in the United States, the eastern wolf (Canis lycaon). The eastern wolf is found in Minnesota, Wisconsin and Michigan and is virtually indistinguishable from gray wolf subspecies in the area by physical, behavioral and ecological traits. Genetic comparison is the only way to distinguish between the eastern wolf and other gray wolf subspecies in the region. Canada Canada is home to the second largest gray wolf population in the world with 60,000 wolves. Currently, Canadian wolves occupy about 90% of their historic range. Interestingly, the 10% of Canada now lacking wolves is primarily near the US-Canada border. The rocky mountain, great plains, and arctic subspecies of gray wolf are all found in Canada. Additionally, the eastern wolf is found here. [Photos: Mexican gray wolf, eastern wolf, red wolf, Russian gray wolf, Iberian gray wolf] Much like in the United States, wolves once roamed throughout much of Europe. However, human conflict and fears grew from myths and religion, and for centuries, humans have persecuted and hunted wolves in Europe. There are several countries in Europe where wolves live, and much like in North America, their range and population vary by region.
Currently there are no more than 13,000 wolves in Europe (excluding Russia). There are 5 subspecies of gray wolves found in Europe: the Eurasian arctic wolf (aka tundra wolf), Russian gray wolf (aka Eurasian gray wolf), Italian gray wolf, Indian gray wolf (aka desert wolf), and Iberian gray wolf. Eurasian arctic wolves are similar to North America's arctic wolves and reside in the northernmost latitudes of Europe. The Russian gray wolf is the largest of the gray wolf subspecies, with individuals averaging between 152-176 lbs, and is found all over Europe and the northern hemisphere of Asia. The Italian gray wolf is native to the Italian peninsula. They are a smaller subspecies, weighing between 55-77 lbs on average. It is estimated there are between 700-1,300 wild Italian wolves. The Indian gray wolf is mostly found in Southwest Asia and India, but populations have expanded into southeastern European countries such as Turkey. There are about 2,500 wild Iberian gray wolves, which roam northern Portugal and northwestern Spain. This subspecies is interesting because it has been isolated from other wolf populations, making it the most genetically distinct European subspecies. The Iberian gray wolf is also the largest wolf population in Western Europe. Iberian gray wolves are between 85-110 lbs on average. [Photo: Italian gray wolf] ASIA Gray wolves once lived throughout most of Asia. Wolves still roam many Asian countries and prey on ungulate species. The range and population of wolves in Asia varies by region. Overall, Asia has around 89,000-105,000 wolves. There are 6 subspecies of gray wolves found in Asia: the Eurasian arctic wolf (aka tundra wolf), Russian gray wolf (aka Eurasian gray wolf), Indian gray wolf (aka desert wolf), Arabian gray wolf, Caspian Sea gray wolf, and Tibetan gray wolf (aka Himalayan gray wolf). Eurasian arctic wolves are similar to North America's arctic wolves and reside in the northernmost latitudes of Asia.
The Russian gray wolf is the largest of the gray wolf subspecies, with individuals averaging between 152-176 lbs, and is found all over Europe and the northern hemisphere of Asia. The Indian gray wolf is mostly found in Southwest Asia and India, but populations have expanded into southeastern Europe. Due to the warmer environments that the Indian gray wolf is native to, it lacks the winter coat seen in other subspecies of gray wolves. It is also smaller than other gray wolf subspecies. The Arabian gray wolf is the smallest of all the wolves, weighing around 45 lbs on average. This subspecies calls the Arabian Peninsula its home and is well adapted to desert life. Its pack size tends to be small (2-4 individuals) and they are omnivorous. The Caspian Sea gray wolf lives in the Caspian Steppes and is between 77-88 lbs on average. The Tibetan gray wolf is native to China and Nepal and is the most genetically divergent subspecies of gray wolf in the world. These wolves tend to be a lighter brown with more white around their face and on their legs. Tibetan gray wolves occupy territories at higher altitudes and have evolved to withstand low oxygen levels. These wolves weigh around 75-77 lbs on average. Interestingly, Tibetan gray wolves howl at a lower frequency and for a shorter duration than other gray wolves. [Photos: Indian gray wolf, Tibetan gray wolf, Caspian Sea gray wolf, Arabian gray wolf] Africa Africa has the smallest gray wolf populations. There is only 1 subspecies of gray wolf found in Africa, the African gray wolf. The African gray wolf was formerly considered a subspecies of golden jackal and lives in several distinct, small regions in northern Africa. However, recent research suggests Africa may be home to another species of wolf, the Ethiopian or Abyssinian wolf (Canis simensis). The Ethiopian wolf is endangered, with about 420 left in the wild. These wolves are red in color and small, weighing between 28-36 lbs on average.
More than half the population of the Ethiopian wolf is found in the Bale mountains, and rodents make up most of their diet. The social behaviors and communication of these wolves are similar to those of other wolf species, and their average pack size can range from 3-13 members. You can learn more about Ethiopian wolves here.
https://www.awf.org/wildlife-conservation/ethiopian-wolf
Ethiopian Wolf | African Wildlife Foundation
Ethiopian Wolf What are Ethiopian wolves? Native to Ethiopia, these long-limbed, slender canids are some of the most endangered animals in Africa. They have a black, bushy tail that can reach up to 40 centimeters in length, pointed ears, and a slender snout. They are tawny red with a white underbelly and blaze on their chests, and also have white fur on their throats, which sweeps up and covers the underside of their muzzle. The female wolves tend to be paler in color than males and are smaller overall.
Scientific name: Canis simensis
Weight: 11 to 20 kilograms (24 to 42 pounds)
Size: up to one meter in length (about 3 feet)
Lifespan: up to 10 years
Habitat: Afro-alpine grasslands, rocky areas, and shrublands
Gestation: 60 to 62 days
Predators: humans
Found in 7 isolated enclaves; fewer than 440 individuals remaining; annually, only about 60% of dominant females breed successfully.
Challenges Agriculture is swallowing up Ethiopian wolf habitat. Humans currently pose the largest threat to this species. Subsistence farming in Ethiopia's highlands is overtaking large swaths of their range, restricting them to higher and higher altitudes. The overgrazing of livestock is only exacerbating this habitat loss. Diseases are taking a toll. Population decline of the Ethiopian wolf is increasingly being tied to diseases, particularly in the Bale Mountains. Since 2008, this Ethiopian wolf population has declined by 30 percent due to consecutive epizootics of rabies and canine distemper. Rabies is a potential threat to all populations of the Ethiopian wolf, while canine distemper remains a serious concern in Bale. Solutions Our solutions to protecting and conserving the Ethiopian wolf: Economic Development Create income alternatives. African Wildlife Foundation is working to establish new mechanisms for ensuring local communities' livelihoods. Our Simien Mountains Cultural Tourism project is improving infrastructure and accommodations in and around the national park.
Increased revenue from community-owned and -operated tourism will reduce dependence on subsistence farming, ensuring Ethiopian wolf habitats stay protected. Community Empowerment Enlist local communities. In the Simien Mountains and three other locations in the Ethiopian highlands AWF engages local communities as "Wolf Ambassadors" to monitor wolves, introduce a report system to understand the causes of livestock predation by carnivores, and undertake rabies vaccinations for domesticated dogs to prevent disease outbreaks from spreading to Ethiopian wolf populations. Behaviors They are family-oriented. Ethiopian wolf packs are groups of extended family members, made up of all the males born into the pack during the previous years and one or two females. During breeding season, commingling between different packs is more common due to habitat saturation and the high potential for inbreeding inside the closely related pack. These interactions are highly vocal, and end when the smaller pack flees from the larger one. Raising Ethiopian wolf pups is a communal activity. Adult Ethiopian wolves in a pack will help raise each other's pups. Wolf mothers give birth in a den they dug themselves, under a boulder or inside a rocky crevice. These dens usually consist of a highly utilized network of burrows, which can have multiple entrances and be interconnected. Pups are regularly shifted from one den to another. Diet Ethiopian wolves live together, but hunt alone. Unlike other wolf species, the Ethiopian wolf is a solitary hunter. The Ethiopian wolf's diet consists mainly of the giant mole rats and common grass rats that are abundant in its habitat. On rare occasions, these canids will hunt cooperatively to bring down young antelopes, lambs, and hares. However, Ethiopian wolves are social animals and form packs of three to 13 individuals — this allows them to defend a territory with enough rodents to feed the entire group. Habitats Where do Ethiopian wolves live?
As its name suggests, the Ethiopian wolf is endemic to Ethiopia. Populations are restricted to just seven isolated enclaves in the Ethiopian highlands, with the largest Ethiopian wolf population (120 to 160 individuals) found in the Bale Mountains in southern Ethiopia.
https://news.mongabay.com/2018/12/pressure-mounting-for-the-home-of-wild-coffee-and-ethiopian-wolves/
Pressure mounting for the home of wild coffee and Ethiopian wolves
Pressure mounting for the home of wild coffee and Ethiopian wolves In the Harenna Forest of Bale Mountains National Park, a convergence of economic development, climate change and population growth threatens the health of the area's ecosystem. by Nathan Siegel on 18 December 2018 The region of Bale Park is vital to the survival of endemic flora and fauna, like the mountain nyala (Tragelaphus buxtoni), a large antelope, and some of the planet's last wild coffee. Bale is also home to other ancient forms of livelihood, such as traditional beekeeping. Now there's a mounting battle to preserve the park, a crucial part of southern Ethiopia's ecosystem and a watershed source for 12 million people. GOBA, Ethiopia — Abdul Mohamed has an unenviable job. He's a park ranger tasked with protecting Harenna Forest in southern Ethiopia from illegal activities like logging and charcoal production. Mohamed realizes the importance of preserving it for his, and his neighbors', livelihoods. The problem is that Mohamed and his eight colleagues are in charge of protecting an area of about 100 square kilometers (40 square miles). "We need more people," he says. Patrolling for shenanigans can be dirty work. On the off chance that they catch someone red-handed burning charcoal or cutting trees for firewood or furniture, there are sometimes aggressive confrontations. "People are ready to fight," Mohamed says. Communities that want to clear land for crops don't understand the position of the rangers, and often start arguing or threaten violence, he says. "I get very scared!" he adds, a shocking statement from a man with his job and physical stature. Mohamed is on the front lines in a mounting battle to preserve Harenna Forest, part of a crucial ecosystem in southern Ethiopia's Bale Mountains National Park. Economic development, climate change and population growth threaten the health of the park.
The region is vital to the survival of endemic flora and fauna, like the mountain nyala (Tragelaphus buxtoni), a large antelope, and some of the planet’s last wild coffee, as well as ancient forms of livelihood such as beekeeping. The park serves as a watershed for 12 million people, many of whom live in the arid lowlands and rely on the park’s rivers for survival. Harvest season in the birthplace of Arabica Ashraka Kadeem, 20, in a makeshift shelter in Manyate Village on the outskirts of Harenna Forest. Photo by Nathan Siegel for Mongabay. On the banks of the Yadot River, which runs through the edge of Harenna Forest, Ali Nurut is in good spirits. The coffee harvest runs from mid-September to mid-November and is well underway. Vibrant red fruits hang from the thin coffee trees that grow at elevations of 1,300 to 1,800 meters (4,300 to 6,000 feet). The berries stand out from the wiry branches and minimal foliage of the trees. Nurut is careful to pick just the ripe ones, but moves with practiced speed and efficiency. Without stopping to answer this reporter’s questions, Nurut drops the red berries into a long cylindrical straw basket draped over his shoulder. The baskets are ubiquitous in the area this time of year. The land in the forest is owned by the community and then parceled out for families to pick coffee. Unlike many of his friends, Nurut’s plot of land in the rainforest is surrounded by a fence to keep out animals. He’s cleared the brush around his coffee trees, whereas his neighbors’ trees are suffocated by the intense plant life of the rainforest. Clearing allows the trees to produce more and better berries, Nurut says. He harvests about 120 kilograms (260 pounds) of coffee beans per year, but expects an increase this season. In previous years, he’s received the equivalent of $2.75 per kilo ($1.25 per pound). It’s a good thing, too, because “I have seven kids to support with coffee,” he says, still not breaking from the methodical work. 
Around 3,000 people collect coffee in the national park, says Abdul Kadeem, a member of the Sankate Coffee Association. As the birthplace of Arabica, southern Ethiopia is a well-known source of the world’s 100th most traded product. It is also one of the last places where endemic coffee still grows naturally in the wild. The small, unassuming fruit trees can be found in the midst of the mayhem of the rainforest. But these wild plants, and much of Ethiopia that is suitable for farming coffee, may soon be a thing of the past. In a 2017 study published in Nature, scientists projected increasingly unfavorable changes for coffee-farming areas in Ethiopia, including Harenna Forest. The culprit: climate change. The study predicted that up to 59 percent of such land would no longer be able to grow coffee by the end of the century because it would be too warm and dry. While other regions can move their coffee to higher altitudes, the slopes of the Bale Mountains are too steep to allow such a transition, says Justin Moat, a spatial scientist at the Royal Botanic Gardens Kew, U.K., who led the study. “No matter what kind of scenario [for climate change], by the end of the century, Bale is not looking good,” Moat told Mongabay. But climate change is not the only concern for those harvesting coffee in Harenna Forest. Agriculture, deforestation and livestock grazing also threaten the coffee industry. As park ranger Mohamed notes, people living on the edge of Harenna Forest clear land and use it for agriculture. Coffee drying outside the house of Abdul Kadeem in Manyate Village on the outskirts of Harenna Forest. Photo by Nathan Siegel for Mongabay. For small-scale wild-coffee producers like Nurut, who rely almost entirely on the harvest for their livelihood, that’s bad news. Just 100 meters (330 feet) from where Nurut works his coffee plantation, a herd of cows marches through the forest. 
It’s not illegal to bring livestock into the forest, but many believe grazing has an adverse impact on the forest. The lowland residents of Bale Mountains National Park have access to fewer resources than those living near the forest. In fact, many herders bring their livestock into the highlands of the national park to graze for weeks at a time. Farm Africa, a nonprofit based in the U.K., is trying to improve the livelihoods of people living in the lowlands so they don’t make that journey. The work is being done in the context of a nearly 7 percent loss of forest annually, according to Farm Africa. The wild coffee picked in Harenna Forest is often mixed with farmed coffee from nearby Delo Menna, to be roasted and exported. Locals say they can taste the difference between farmed and wild coffee. But if the global marketing machine shone a spotlight on wild coffee here and there was a boom in demand, pressure to clear away other plants to maximize the harvest of wild coffee could spike. That would hurt the forest, says Kadeem of the coffee association. A 2006 study published in Forest Ecology and Management seems to corroborate this, finding a 50 percent reduction in the number of species of lianas, small trees and shrubs in areas where plants were cleared to help the coffee grow. Ethiopian wolves roam the mountains High above Harenna Forest, where thousands are at work picking coffee, at an elevation of about 1,800 meters the rare Ethiopian wolf prowls the windswept plateaus. While there are only about 400 of the wolves left in the world, making them Africa’s rarest carnivore species, they roam freely on the Sanetti Plateau in Bale Mountains National Park. Home to the highest road in Africa, the park is one of the few Afromontane areas in Ethiopia. But the wolves are shy creatures, so it was particularly extraordinary to encounter one while hiking on the plateau. The wolf was hunting for rodents no more than 15 meters (50 feet) from us. 
Access to food isn’t the biggest concern for the survival of the Ethiopian wolf (Canis simensis), which is the size of a medium dog with a striking red coat. They wolves’ main source of food, rats, abound in the National Park; there are 3,000 kilograms of rats per square kilometer (10,500 pounds per square mile) in some meadows. The main threats come from domestic animals and pressure on the land where they hunt. Disease outbreaks brought by dogs that accompany livestock herders in the highlands of the park are one of the most immediate threats to the wolves. There have been successive outbreaks of rabies and canine distemper virus in recent years in the park. Three out of four wolves die in such outbreaks. But the wolves bounce back from each outbreak, suggesting they’ve developed some type of resilience, according to the Ethiopian Wolf Conservation Programme, which also helps vaccinate domestic dogs to avoid outbreaks. A more permanent threat to the wolves, according to the EWCP, is habitat loss. As human populations move farther up the mountains in search of land for farming and grazing, the wolves are squeezed into smaller areas. Barley and potatoes are grown as high as 4,000 meters (13,000 feet) in some areas, and 60 percent of the habitat potentially suitable for the wolves has been converted to agriculture, according to the EWCP. While the Ethiopian wolves are the most prominent endangered species in the park, the area is an endemic hotspot for both animals and plants. Bale Mountains National Park hosts a quarter of Ethiopia’s endemic mammal species, including the entire global population of the big-headed African mole-rat (Tachyoryctes macrocephalus) and largest global population of mountain nyala, as well as 6 percent of the country’s bird species. Almost half of the 1,000 known medicinal plant species in Ethiopia are found in the park. 
More mammal species would go extinct with the loss of Bale Mountains National Park than with the loss of any other area of equivalent size on the planet, according to UNESCO. Traditional beekeeping only for the brave A beehive placed about 60 ft high in an African Redwood in Harenna Forest, part of the Bale Mountains National Park, in southern Ethiopia. Photo by Nathan Siegel for Mongabay. Harvesting wild coffee isn’t the only traditional practice taking place in the forest. Scattered and camouflaged throughout the tallest canopies in the forest are oblong wooden contraptions: traditional beehives honed over centuries of practice. While every village near Harenna Forest hums with activity from the coffee harvest, these structures are a reminder of the myriad ways locals have lived in harmony with the forest for generations. The beehives are made of two large canoe-like pieces of wood, fastened together with rope. They are then placed high in native African redwood trees, usually 21 meters (70 feet) above ground, to deter daring honey badgers or potential thieves. When the honey is ready to harvest, the beekeeper will scale the tree with a rope and nothing else. It’s a risky business, suited only to the courageous and fit. Kadeem, who has five beehives near his house, hires someone else to harvest for him because he’s scared of heights. He says people have fallen and died. After reaching the hive, the beekeeper burns moss and blows smoke inside to disorientate the bees and prevent them from aggressively protecting their prized creation. Kadeem harvests about 36 kilograms (80 pounds) of honey twice a year and makes the equivalent of $1.50 per kilo (70 cents per pound). It’s less than half of what he gets for coffee, “because the coffee is for export but the honey is for local consumption.” Honey is sold as a remedy for the common cold and to make tej, a mead or honey wine. Honey and bees are a fixture of the traditions of Ethiopia. 
The northern town of Lalibela, renowned for its monolith churches cut out of the rock in the 13th century, is named after a king who, legend goes, was swarmed by bees at birth but remained unscathed. His mother saw this as a sign of his long reign and named him Lalibela, which translates to “the bees recognize his sovereignty.” With a pinch of salt Ashraka Kadeem, 20, making coffee the traditional way, by roasting and grinding the beans herself, in Manyate Village on the outskirts of Harenna Forest. Photo by Nathan Siegel for Mongabay. As his bees produce their honey in hives high above the ground, Kadeem remains focused on plants at eye level. At his home, the coffee beans have been separated from their berries and laid out in the sun to dry. His wife, Ashraka Kadeem, starts to prepare a traditional brew with already-dried coffee beans. Ashraka is 20 and has been making coffee like this for over half of her life; it appears she could do it with her eyes closed. The beans are cleaned and then roasted on a metal pan over an open fire under a makeshift canopy outside. The beans change in color from pale green to brown then black, accompanied by the familiar scent of roasted coffee. The coffee is ground in a wooden bowl until it is a fine powder. The intensive labor required is why most people buy grounds from nearby towns, but Ashraka maintains it tastes better homemade. The grounds are placed in a jebena, an Ethiopian coffee pot, and water is poured in. Before serving, a pinch of salt is added. Hard-core coffee drinkers around the world might do a double take at this point, but it’s hard to argue with artisans in the birthplace of the drink. Full disclosure: the writer worked as a photographer for Farm Africa on a previous trip to Ethiopia. Banner image: Adbul Kadeem inspecting drying coffee outside of his house in Manyate Village on the outskirts of Harenna Forest.
While there are only about 400 of the wolves left in the world, making them Africa’s rarest carnivore species, they roam freely on the Sanetti Plateau in Bale Mountains National Park. Home to the highest road in Africa, the park is one of the few Afromontane areas in Ethiopia. But the wolves are shy creatures, so it was particularly extraordinary to encounter one while hiking on the plateau. The wolf was hunting for rodents no more than 15 meters (50 feet) from us. Access to food isn’t the biggest concern for the survival of the Ethiopian wolf (Canis simensis), which is the size of a medium dog with a striking red coat. The wolves’ main source of food, rats, abound in the national park; there are 3,000 kilograms of rats per square kilometer (10,500 pounds per square mile) in some meadows. The main threats come from domestic animals and pressure on the land where they hunt. Disease outbreaks brought by dogs that accompany livestock herders in the highlands of the park are one of the most immediate threats to the wolves. There have been successive outbreaks of rabies and canine distemper virus in recent years in the park. Three out of four wolves die in such outbreaks. But the wolves bounce back from each outbreak, suggesting they’ve developed some type of resilience, according to the Ethiopian Wolf Conservation Programme, which also helps vaccinate domestic dogs to avoid outbreaks. A more permanent threat to the wolves, according to the EWCP, is habitat loss.
yes
Zoogeography
Are wolves native to Africa?
yes_statement
"wolves" are "native" to africa.. africa is home to "wolves".
https://www.thesafaricollection.com/the-painted-wolves-of-africa/
PAINTED WOLVES OF AFRICA - The Safari Collection
PAINTED WOLVES OF AFRICA Today we celebrate one of Africa’s most efficient hunters: the African Wild Dog. Their scientific name, Lycaon pictus, literally means ‘painted wolf’ and it’s easy to see why. Daubed with splodges of black, brown, yellow and white fur, their patchy coats give individual dogs a unique marking, a little like a fingerprint. Enormous Mickey Mouse-like ears and white-tipped bushy tails give them a rather endearing appearance. Wild dogs are Africa’s second most endangered carnivore after the Ethiopian wolf. Curious African wild dog On alert Many people who are lucky enough to witness these charming canines on safari are often surprised at how akin to man’s best friend they first appear to be. Similar in size to a Labrador (although much more streamlined), they are extremely social and playful. They live in groups of anywhere from seven to 40, and greetings between pack members are an exciting occasion for all: an explosion of licking, whining, wagging of tails and rolling around. Seldom aggressive to one another, African wild dogs are, in fact, one of the most caring and strongly bonded species. The whole pack helps to care for pups as well as injured and aging members, with those in need given regurgitated meat following a hunt. As well as their caring and social nature, we love that the ‘top dog’ is always a female. Packs are ruled by an alpha mating pair, although it’s the female who decides on pack membership, den location and so on. Unlike other mammals, it’s also the female that leaves the family to form her own new pack, whilst males remain in their birth group. Girl power! Despite the similarities many safari-goers observe with their pet pooches, wild dogs cannot be domesticated. Nor can they interbreed with domestic dogs. Tragically, African wild dogs are the most endangered carnivore on the planet. Native only to Africa, they historically roamed 39 countries across the continent. Now, they are found in only 14. 
Only 6,600 African wild dogs remain on the planet and their population is declining. Wild dogs on the sandy banks of the Ewaso Nyiro River at Sasaab Major threats include habitat loss as humans encroach on their living areas, as well as human-wildlife conflict. As a nomadic species, wild dog territories can extend well over 1,000 square kilometres. Persecution from farmers, who often misguidedly blame dogs for livestock deaths, is being tackled by conservation groups such as the Mara Predator Conservation Programme (MPCP), one of the initiatives our Footprint foundation supports. Through monitoring of wild dog packs, community outreach, anti-poison campaigns and a deworming and vaccination programme, the MPCP are helping to secure a future for this endangered species. This year alone, the MPCP have vaccinated over 1,500 domestic cats and dogs in conservancies bordering the edge of the Masai Mara. This is in a bid to reduce rabies and other diseases thought to be easily transmitted to wild dogs. For those yet to encounter this rare safari animal, you might be lucky enough to spot them during a safari at Sasaab. A big family pack was recently witnessed hanging out on Leopard Rock, right next to the lodge in Westgate Conservancy. Our guests have also spotted wild dogs in Samburu National Reserve. It is always a mesmerizing moment to come across a pack, lazing in the roadside dust or trotting through the bush. Midday or afternoon seems to be when they are most frequently sighted in and around Sasaab. Through our unique collection of spectacular properties, online shop and Footprint foundation, we unite sustainable tourism with wildlife conservation and communities, making a difference to people and planet.
PAINTED WOLVES OF AFRICA Today we celebrate one of Africa’s most efficient hunters: the African Wild Dog. Their scientific name, Lycaon pictus, literally means ‘painted wolf’ and it’s easy to see why. Daubed with splodges of black, brown, yellow and white fur, their patchy coats give individual dogs a unique marking, a little like a fingerprint. Enormous Mickey Mouse-like ears and white-tipped bushy tails give them a rather endearing appearance. Wild dogs are Africa’s second most endangered carnivore after the Ethiopian wolf. Curious African wild dog On alert Many people who are lucky enough to witness these charming canines on safari are often surprised at how akin to man’s best friend they first appear to be. Similar in size to a Labrador (although much more streamlined), they are extremely social and playful. They live in groups of anywhere from seven to 40, and greetings between pack members are an exciting occasion for all: an explosion of licking, whining, wagging of tails and rolling around. Seldom aggressive to one another, African wild dogs are, in fact, one of the most caring and strongly bonded species. The whole pack helps to care for pups as well as injured and aging members, with those in need given regurgitated meat following a hunt. As well as their caring and social nature, we love that the ‘top dog’ is always a female. Packs are ruled by an alpha mating pair, although it’s the female who decides on pack membership, den location and so on. Unlike other mammals, it’s also the female that leaves the family to form her own new pack, whilst males remain in their birth group. Girl power! Despite the similarities many safari-goers observe with their pet pooches, wild dogs cannot be domesticated. Nor can they interbreed with domestic dogs. Tragically, African wild dogs are the most endangered carnivore on the planet. Native only to Africa, they historically roamed 39 countries across the continent. Now, they are found in only 14.
no
Zoogeography
Are wolves native to Africa?
yes_statement
"wolves" are "native" to africa.. africa is home to "wolves".
https://www.painteddog.org/
Painted Dog Conservation
Painted dogs are one of the most endangered species in the whole of Africa. Fewer than 7,000 painted dogs are left across the entire continent. They may not be as famous as their trunked, horned, or maned neighbours, but these painted dogs —also known as African wild or hunting dogs—are beautiful, unique, and fascinating social animals. Painted dogs are native to Africa, and aren’t found in the wild anywhere else on the planet. They live in small pockets across a handful of countries including Zimbabwe, the home of Painted Dog Conservation. There are roughly 700 painted dogs here, and we work with local populations of both humans and dogs—via conservation, education, and outreach programs—to help them not only survive here, but thrive. In Zimbabwe, painted dogs are protected under the following Statutory Instruments (SI): Mother knows best: Painted dogs live in matriarchal societies, with packs of up to 30 members all answering to an alpha female. #paintedwolfwednesday #repost @paintedwolf_org P A I N T E D . W O L F . W E D N E S D A Y Demystifying the ‘myth’ that is #Lycaon in the painted wolf’s scientific name: #Lycaonpictus! A painted wolf’s scientific name is Lycaon pictus, translating to wolf-like creature in #Greek, with ‘pictus’ meaning painted in #Latin. #Paintedwolves are the only living species of the genus Lycaon; most other #canines, like grey wolves, are #canis. The name Lycaon comes from King Lycaon of Ancient Greek #mythology: Lycaon, the sadistic King of #Arcadia, lost favour with #Zeus after he tried to feed the God of Lightning human flesh at a banquet. To punish Lycaon, Zeus turned him into a half-man, half-wolf-like creature. 
Ever wonder where the #werewolf myth comes from? While painted wolves might share a name with the cruel lupine king, they themselves are not wanton killers but empathetic, caring and tender to their pack mates. There has been no recorded attack on humans by a painted wolf in the wild. Help create awareness for the plight of the painted wolf by participating in #PaintedWolfWednesday. Photograph by award-winning Nicholas Dyer Photography. “The continued existence of wildlife and wilderness is important to the quality of life of humans.” - Jim Fowler. To help save painted dogs follow LINK IN BIO. #mondaymotivation #savethepainteddog #pdc #painteddogs #africanwilddog #wildlife #environment Photo Credit: @wildxonex Always a beautiful sight to experience. Painted dogs are one of the most endangered species in the whole of Africa. Save the painted dog, LINK IN BIO. #Lukodetpack #endangeredspecies #painteddog Video Credit: Washington Moyo #paintedwolfwednesday #repost @paintedwolf_org P A I N T E D . W O L F . W E D N E S D A Y #Paintedwolves don’t just communicate with hoo-calls but scent-marking too. The #wolves have scent glands located on their anus, genitals, and face. #Lemurs, #cheetah, hyena, lion and even #rhino use #scentmarking as a form of communication. Scent marking is when a #mammal produces a hormonal substance from a scent gland, or in the form of urine or feces, and deposits it in a prominent area. These glands allow painted wolves to communicate sexual readiness, gender, age and health with pack mates and other painted wolves. Unlike cheetah, which use scent posts, painted wolves tend to prefer scent marking on tall grass. Painted wolves scent mark more in the heart of their home range and less on the periphery of their territory. Wolves of higher social rank, like the #alphas, tend to scent mark more than subordinate pack members. 
Help create awareness for the plight of the painted wolf by participating in #PaintedWolfWednesday. Image by award-winning @nicholasdyerphotography The #dogrun, every second Saturday of the month. Running to save an endangered species. Check the dates and join us if you are in the neighbourhood on the next run. #conservationmeetsfitness #zimparks #forestrycommisionzimbabwe #dete #crossdete #thedogrun #endangeredspecies #savethepainteddog #pdc #painteddog Question: What do you like about painted dogs? Me: Their big rounded lovely ears make them so attentive every time😍 You:....................? #whatdoyoulikeaboutpainteddogs #savethepainteddog #pdc #painteddog #aficanwilddog #paintedwolf Photo Credit: David Graham. On the frontline of conservation. #repost @peterblinston Two out of the ten dogs in the Wexau Pack. A great two days in the field, culminating in successfully collaring adult male “Peace”. #mambanjeprimaryschool kids were so happy to be picked up this morning, coming to the Iganyana Children’s Bush Camp where they will stay four days. Here, they will learn about conservation and wildlife in a way they never have before. We believe that engaging children (and adults) in local communities surrounding Hwange National Park is the right way to stop painted dog (and other wildlife) threats recurring in subsequent generations. “We cannot expect kids to care, let alone take action to conserve something they do not have an appreciation of or love.” - @peterblinston To support the Iganyana Children’s Bush Camp kindly follow LINK IN BIO. #mambanje #mondaymotivation #iganyanachildrensbushcamp #kids #conservation #wildlife #environment #savethepainteddog #pdc #2020 Video Credit📹: @davidkuvawoga Ngweshla pack feeding on a kudu in Linkwasha yesterday. As you can see from the tall grass, Hwange has been receiving a generous portion of rain in the past few days; hopefully it keeps up the trend so that locals' crops grow and the threat of poaching is reduced. 
#ngweshlapack #savethepainteddog #pdc #linkwasha Video Credit: Geshem Njamba Sights and scenes from the pre-camp meeting/assessment with kids from Mambanje Primary School who are due to come for a four-day free camp at our Iganyana Children’s Bush Camp next week. Before kids come to Iganyana Children’s Bush Camp, a pre-camp is conducted to prepare them in terms of what is expected of them and what to expect at the camp during their four-day stay. A pre-camp assessment is also done; kids answer a questionnaire about the environment, wildlife and conservation in general. After the camp, they will be presented with the same questionnaire to assess the impact of the camp. Always, the results show that kids come out of the camp more knowledgeable about, and with a positive attitude towards, the environment and wildlife. An important aspect that will ensure the survival of wildlife and the habitat Hwange is endowed with. Follow LINK IN BIO to support the Iganyana Children's Bush Camp. #savethepainteddog #pdc #iganyanachildrenbushcamp #kids #education #conservation #environment #wildlfe #painteddogs #mambanjeprimaryschool #milliontrees2020 #trees #paintedwolfwednesday Photo Credit📸: @davidkuvawoga #paintedwolfwednesday #Repost @paintedwolf_org P A I N T E D . W O L F . W E D N E S D A Y ‘To know them is to love them,’ – Africa’s most successful #predator has a softer side. Being empathetic and caring, the #paintedwolf pack is only as good as its weakest member, and unlike #lions, they sympathetically care for the frail. In the wild, a serious injury or illness is a #deathknell for even the hardiest of African #animals, but not painted wolves. The sick, injured and old wolves, along with puppies, take preference at the kill and will be tenderly cared for and nursed back to health. 
There are cases of #wolves surviving on three legs and recovering from serious injury due to the #empathy and dedication of their #pack mates. If you can be anything in this #world, be kind, like the painted wolf. #Humanity has a lot to learn from these sociable and loving #canines. Share the love this #PaintedWolfWednesday. Image by @nicholasdyerphotography “It is our deep-felt belief that in order to make a difference and win in conservation, it is paramount to change lives.” — Wilton Nsimango, PDC Education and Community Development Programs Manager. @painted_dog_conservation aim to directly benefit local people by providing a way for them to earn more and access nutritionally balanced and reliable meals. To these ends, we build and establish nutritional gardens with irrigation systems next to boreholes in the local communities whose children attend our Children's Bush Camp. With your help, we aim to drill and manage more boreholes. To help change lives, follow LINK IN BIO. #mondaymotivation #kids #community #savethepainteddog #painteddogs #pdc #development Photo Credit📸: Molly Feltner. #paintedwolfwednesday #Repost @paintedwolf_org P A I N T E D . W O L F . W E D N E S D A Y My, what big #teeth you have! Relative to body size, #paintedwolves have the largest #premolars of any living #carnivore, second to the #spottedhyena. Painted wolves are considered to be hypercarnivores, which means 70% of their diet is meat. As a result of the high-protein diet, their premolars have evolved to become more carnassial, meaning blade-like. Having teeth that are adapted to holding and slicing flesh is an advantage for the #wolves when taking down prey. Quick-cutting teeth also reduce the time at the kill. 
Being slightly built, the faster the pack can eat and move on, the less likely they are to come into conflict with other big #predators, like #lion and #hyena. Image by award-winning @NicholasDyerPhotography. Share your #paintedwolfwednesday stories with us. #paintedwolfwednesday #Repost @paintedwolf_org P A I N T E D . W O L F . W E D N E S D A Y What to expect when a #paintedwolf is expecting… Painted wolves produce surprisingly large litters for their body size. After a gestation period of about 71–73 days, the #alphafemale will give birth to between eight and eleven #puppies. Each pup clocks in at 300–350 grams and is born blind. #Paintedwolves do not have an equal gender ratio. The sex ratio of a litter tends to be male-biased. This means more male puppies are born relative to female pups per litter. The #lactating #alpha will remain close to her pups for the first month. After four weeks regurgitated meat is added to the menu. #Pups are fully weaned at two months and will remain bound to the #den for a further four to eight weeks. Image by award-winning Nicholas Dyer Photography. Share your #PaintedWolfWednesday stories with us!
Painted dogs are one of the most endangered species in the whole of Africa. Fewer than 7,000 painted dogs are left across the entire continent. They may not be as famous as their trunked, horned, or maned neighbours, but these painted dogs —also known as African wild or hunting dogs—are beautiful, unique, and fascinating social animals. Painted dogs are native to Africa, and aren’t found in the wild anywhere else on the planet. They live in small pockets across a handful of countries including Zimbabwe, the home of Painted Dog Conservation. There are roughly 700 painted dogs here, and we work with local populations of both humans and dogs—via conservation, education, and outreach programs—to help them not only survive here, but thrive. In Zimbabwe, painted dogs are protected under the following Statutory Instruments (SI): Mother knows best: Painted dogs live in matriarchal societies, with packs of up to 30 members all answering to an alpha female. #paintedwolfwednesday #repost @paintedwolf_org P A I N T E D . W O L F . W E D N E S D A Y Demystifying the ‘myth’ that is #Lycaon in the painted wolf’s scientific name: #Lycaonpictus! A painted wolf’s scientific name is Lycaon pictus, translating to wolf-like creature in #Greek, with ‘pictus’ meaning painted in #Latin. #Paintedwolves are the only living species of the genus Lycaon; most other #canines, like grey wolves, are #canis.
no
Zoogeography
Are wolves native to Africa?
yes_statement
"wolves" are "native" to africa.. africa is home to "wolves".
https://animaldiversity.org/accounts/Canis_anthus/
Canis anthus
Geographic Range African golden wolves (Canis anthus), which were considered the same species as Eurasian golden jackals (Canis aureus) until 2015, are found across northern Africa. Their range extends east to west from Somalia to Senegal and north to south from Algeria to Kenya. Thus, golden wolves occupy the Palearctic and Ethiopian faunal regions. Because golden wolves are a highly mobile species, their wide range was likely colonized naturally. Their historic range is unknown. (Karssene, et al., 2018; Koepfli, et al., 2015; Moehlman and Hayssen, 2018; Yirga, et al., 2017) Habitat African golden wolves live in elevations from 0 to nearly 5,000 m. In the eastern part of their range, golden wolves primarily live in high elevations from 2,200 to 4,620 m. However, in the Sahara Desert, they can be found anywhere between sea level and 4,459 m, in isolated mountains, which are estimated to be refugia for golden wolves in the face of climate change. (Brito, et al., 2009; Yalden, et al., 1996) Because of their generalist behavior and tolerance of dry habitats, golden wolves can be found in a wide range of habitats, including grasslands, coniferous temperate forests, invasive Eucalyptus forests, mangroves, dry plateaus, savannas, deserts, and semi-arid environments. However, their preferred habitat seems to be grasslands. Though golden wolves can occupy many habitats, the limiting factor seems to be access to water sources. In Tunisia, for example, differences in golden wolf distribution were best explained by the availability of water. In the Sahara Desert, golden wolves were found most frequently in areas with an annual rainfall of over 1,000 mm and were one of the few canids found in temperatures below 10°C. The only known unsuitable habitats seem to be extremely arid regions and dune fields. 
(Brito, et al., 2009; Karssene, et al., 2019; Moehlman and Hayssen, 2018; Yalden, et al., 1996) Golden wolves do not occur solely in natural habitats: depending on the area, golden wolves can rely a great deal on human settlements, such as agricultural fields and rural areas. A study in northern Ethiopia found that golden wolf density can actually increase with increasing human density, rather than decreasing to avoid human activity, as is documented for many other canids. This is thought to occur because, in these areas, humans have depleted much of the native prey sources on which golden wolves rely. Therefore, golden wolves resort to feeding on human waste products in these areas. Golden wolves are thus urban exploiters, suggesting that golden wolves may thrive in the face of a growing human population. (Yirga, et al., 2017) Physical Description African golden wolves were originally considered to be the same species as Eurasian golden jackals (Canis aureus). However, a study comparing mitochondrial DNA, microsatellites, sex chromosomes, and whole genomes showed that golden wolves have a unique gene pool from golden jackals and therefore constitute a different species. African wolves (Canis lupaster) were also concluded to be the same species as African golden wolves. This study also found that African golden wolves are more closely related to gray wolves (Canis lupus) than golden jackals. This close relation to gray wolves is interesting, considering that no gray wolves are found in Africa, but it is speculated that much of the canid diversity in Africa originated from Eurasian “wolf-like” colonizers that eventually went extinct. The first taxonomic description of an African golden wolf by Frédéric Cuvier in 1820 was also the first account that used the binomial nomenclature used for African golden wolves today: Canis anthus. 
Additionally, though Cuvier recognized African golden wolves as a separate species from golden jackals, the scientific community did not consider them as such until Koepfli et al.’s paper was published in 2015. (Koepfli, et al., 2015; Tedford, et al., 2009) Golden wolves generally have an overall coat color of golden or pale yellow, dark tawny, or gray, depending on their local habitat. For example, the first golden wolf ever described lived in mountainous terrain and had a gray coat, whereas golden wolves in desert habitat tend to be golden. Regardless of overall coat color, most golden wolves have yellow markings and individual hairs with black, white, and tan bands. Their legs, tails, the back of their ears, and the top of their muzzles are all tan in color, and a black stripe runs down the upper third of their tails, which have black tips. Their throats, chests, stomachs, the inner sides of their legs, and the undersides of their jaws are white. Their hair is longest on their necks and tails, shortest on their heads and legs, and of intermediate length on the rest of their bodies. Hair runs from front to back across the whole body except between the front legs, where it instead runs back to front. Additional morphological characteristics include bushy tails, a thick under wool during winter, pale yellow to amber eyes, 7 to 8 mammary glands on females, and a dental formula of i 3/3, c 1/1, p 4/4, m 2/3. (Koepfli, et al., 2015; Moehlman and Hayssen, 2018) Some sexual dimorphism occurs in golden wolves, mostly in terms of body size. Males have the same coloring and hair patterns as females, but are larger in size: males have a head and body length of 75 to 89.3 cm, a tail length of 20 to 34.7 cm, and weigh 6.3 to 15 kg. Females have a head and body length of 68 to 82.2 cm, a tail length of 20 to 29 cm, and weigh 6.5 to 10 kg. In general, females have 12% less body mass than males. 
There is also some sexual dimorphism in skull length in populations in East Africa, but not in North African populations. (Moehlman and Hayssen, 2018) Golden wolves in East Africa are smaller than golden wolves in West Africa, but otherwise look alike morphologically. There are also some small seasonal morphological differences, such as growing thick undercoats during winter and the emergence of a faint “black saddle” on their backs during some seasons. Overall, however, adult golden wolves look more or less the same, though individual identification may be possible by differences in the white markings on their chests and throats. Even between age classes, golden wolves look very similar; adults can only be differentiated from juveniles by skeletal features. Adults have a high sagittal ridge on the front of their skulls that juveniles lack, and adults exhibit more tooth wear than young golden wolves. (Koepfli, et al., 2015; Moehlman and Hayssen, 2018) Compared to Eurasian golden jackals (Canis aureus) - which scientists originally believed were the same species as golden wolves - African golden wolves look very similar in craniodental anatomy, size, and color. However, golden wolves have smaller muzzles and premolars, larger molars, and narrower, more pointed canines than golden jackals. Additionally, the entire lower third of the tails of golden jackals is black, whereas the tails of golden wolves are only black at their tips. The two species also differ geographically, as golden jackals are found only in Eurasia and golden wolves only in Africa. (Koepfli, et al., 2015) On the eastern end of their range, golden wolves coexist with silver-backed jackals (Canis mesomelas) and side-striped jackals (Canis adustus), both of which are roughly the same size as African golden wolves. 
However, there are visible morphological differences between the three species: silver-backed jackals are easily identified by their red sides and legs and the silver "saddles" they have on their backs, and side-striped jackals have shorter ears, a gray stripe on their sides, and white-tipped tails. In comparison, golden wolves do not have a saddle, except for a vague black one in some seasons. They are also more gray and tan than red, do not have a side stripe, and have black-tipped tails. (Moehlman and Hayssen, 2018)
Sexual Dimorphism: male larger
Range mass: 6.3 to 15 kg (13.88 to 33.04 lb)
Range length: 68 to 89.3 cm (26.77 to 35.16 in)
Reproduction
African golden wolves are monogamous, like many other canid species, though this can be somewhat flexible depending on the abundance of resources and shifting population characteristics. Pair bonds last for a lifetime, and a group will thus consist of the mated pair and their previous offspring, which help to raise young. When defending their territory, mated pairs will fight off intruders intrasexually: males attack other males and females attack other females. It is speculated that this may occur because an individual male wants to make sure he is the only one mating with his female, to ensure the pups he helps raise are his. Meanwhile, an individual female wants to make sure her male does not mate with other females, so that he will fully invest in helping to raise her pups. Thus, territoriality may help enforce monogamous pairs. (Moehlman, 1987; Moehlman and Hayssen, 2018) Both male and female golden wolves reach reproductive maturity at 10 to 11 months of age. Their breeding season is from October to December, with parturition occurring from December to March, sometimes even stretching on into April and May. Copulatory ties last for a few minutes, followed by a gestation period of about 63 days. Litter sizes can range from 1 to 9 pups, averaging around 6 pups per litter.
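As an illustrative cross-check (the calendar dates and arithmetic below are my own, not figures from the account), adding the reported ~63-day gestation to an October-December breeding season reproduces the reported December-March parturition window:

```python
from datetime import date, timedelta

gestation = timedelta(days=63)  # gestation period reported in the account

# Breeding season runs from October through December (the year is arbitrary).
earliest_conception = date(2023, 10, 1)
latest_conception = date(2023, 12, 31)

earliest_birth = earliest_conception + gestation
latest_birth = latest_conception + gestation

# Earliest births fall in early December and the latest in early March,
# matching the reported December-March parturition window.
print(earliest_birth, latest_birth)  # 2023-12-03 2024-03-03
```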
(Moehlman, 1987; Moehlman and Hayssen, 2018) At birth, golden wolf pups weigh around 189 g. They are born blind and it takes 8 to 11 days for their eyes to open. Tooth eruption also occurs at 11 days. Golden wolf pups are born in underground dens, in which they stay until they are three weeks of age. Dens can have multiple openings, which are about 2 to 3 m long and about 1 m deep. Mothers stay in their dens with their pups and are supplemented by their mates. In some families, 11- to 18-month-old offspring from previous litters help raise new pups. The presence of these "helpers" has been shown to increase pup protection and provisioning, and thus survival. Both parents and their helpers assist in socializing the pups. (Moehlman, 1987; Moehlman and Hayssen, 2018) Even once pups emerge from their dens, they still rely on milk from their mother. However, they are also introduced to regurgitated food during this time. This food comes from their mothers and other adults in the group. Pups remain near their dens until they wean at 8 to 10 weeks of age. They begin assisting actively with foraging around 14 weeks of age. About 70% of pups will stay with their parents for up to two years and become helpers. During this time, they will not engage in breeding activity, even when they reach sexual maturity. (Moehlman, 1987; Moehlman and Hayssen, 2018) Only 30% of pups disperse before the next litter is born. It is speculated that most pups do not disperse right away because, due to the high population densities at which golden wolves live, young pups would have a hard time finding a mate and establishing a territory of their own. However, once a juvenile does disperse from its family group, it is not yet known how it then finds a mate and establishes its own territory. (Moehlman, 1987) As is typical of canids, male golden wolf parental investment is high. Mated pairs are monogamous and raise their young together.
Between birth and weaning, pups rely completely on the milk of their mothers. However, their fathers, along with previous offspring of the mated pair, will bring food back for mothers and defend dens. Both parents also assist in socializing the pups and, after the pups are weaned, regurgitate food for them. (Moehlman, 1987; Moehlman and Hayssen, 2018)
Lifespan/Longevity
The maximum known lifespan of an African golden wolf in the wild was observed to be 14 years; in captivity, the maximum lifespan is 18 years. However, usual lifespan in the wild ranges from 6 to 8 years, with an average lifespan of 7 years. Many golden wolves die as pups, as they are especially susceptible to disease and den flooding at this time. Little is known about what limits the lifespan of adults. (Moehlman, 1987; Moehlman and Hayssen, 2018)
Range lifespan (wild): 14 (high) years
Range lifespan (captivity): 18 (high) years
Typical lifespan (wild): 6 to 8 years
Behavior
African golden wolves are solitary until they find a mate. Once a pair bond is formed, that pair stays together for life. The group size associated with a mated pair grows and shrinks depending on how many pups they have and how many offspring stay to help raise the next litter. In summary, golden wolves typically live in groups of two, though this number can increase with the presence of pups and helpers. Strong intraspecific food competition usually selects against larger group sizes, though when food is abundant, large groups have been observed sharing a scavenged carcass. (Moehlman, 1987; Moehlman and Hayssen, 2018) Golden wolves are primarily diurnal - they are mostly active during the day, dawn, and dusk, and are not normally seen at night. Golden wolves are also highly mobile, with males seeming to move farther than females. One male was documented to move at least 230 km - with a high of 465 km - in 98 days.
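For scale, those movement totals can be converted into rough mean daily rates (a back-of-the-envelope sketch; the per-day figures are derived here, not reported in the account):

```python
# Documented male movement: at least 230 km, with a high of 465 km, in 98 days.
days = 98
low_km, high_km = 230, 465

low_rate = low_km / days    # minimum average displacement per day
high_rate = high_km / days  # upper-bound average displacement per day

print(f"{low_rate:.1f} to {high_rate:.1f} km per day")  # 2.3 to 4.7 km per day
```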
Additionally, in Tunisia there is high genetic diversity among golden wolves, suggesting that connectivity and dispersal capabilities are high between populations. Not much is known about how golden wolves find one another, including mates, or how they interact with non-family conspecifics. (Karssene, et al., 2018; Karssene, et al., 2019; Moehlman and Hayssen, 2018; Yirga, et al., 2017)
Home Range
Golden wolves are territorial and generally keep territories of about 0.39 to 5 km^2, though they have also been documented to stray past territorial borders in order to feed on carcasses. Territories are nestled within larger home ranges, the size of which depends on the age of individuals and the type of habitat. Juveniles tend to have a much larger home range than adults, because of their need to spread out in search of mates and territory of their own. In woodland habitats, adult pairs have an average home range of 2.4 km^2, while dispersing juveniles have much bigger home ranges, ranging from 5.6 to 21.7 km^2. Home range size in mountain habitats is much more variable, with adults having home ranges anywhere between 7.9 and 48.2 km^2 and dispersing juveniles having home ranges anywhere between 24.2 and 64.8 km^2. Both members of an adult pair will mark and defend their territory. They keep strict boundaries, though territories tend to overlap when individuals are part of a social group. Territories are generally held for about 8 years. (Moehlman and Hayssen, 2018)
Communication and Perception
Scent markings and vocalizations are the primary ways that African golden wolves communicate with each other. These forms of communication are important for marking territory, mating, predator defense, and locating family members. These actions may be coupled with other signals, such as visual displays.
For example, when an individual golden wolf marks its territory, it will urinate on specific landmarks with a raised leg, rather than in a squatting position, to show any golden wolves that may be watching that it is the holder of this particular territory. African golden wolf vocalizations consist of howls, used for finding family members and asserting dominance, and growls and barks, which are used to warn family members of approaching predators. Barks are also used to stay in contact with group members during hunts for larger prey. Greeting ceremonies and grooming are also important ways of socializing. (Eaton, 1969; Moehlman and Hayssen, 2018)
Food Habits
African golden wolves primarily feed on wild boars (Sus scrofa) of all ages, though golden wolves likely only feed on adult boars as carrion, due to the dangers of actively hunting adult boars. Plant material also makes up a significant part of the diets of golden wolves. This includes various fruits, seeds, leaves and grasses for digestion and a source of water. Rabbits (Oryctolagus cuniculus) and livestock such as domestic sheep (Ovis aries) are also fairly common prey, as are hares (Lepus capensis) and, to a lesser extent, cats (Felis lybica/catus). They have also been documented to eat birds (both wild and domesticated), rodents, and, more rarely, beetles. Thus, African golden wolves can be categorized as omnivores. (Amroun, et al., 2006; Eddine, et al., 2017; Karssene, et al., 2019) Adult pairs hunt together, but otherwise most golden wolves hunt alone. Individual golden wolves have been documented bringing down ungulates 4 to 5 times larger than themselves, though the success rate of mated pairs is higher than that of lone individuals. Adult pairs are also able to go after larger prey, such as Thomson's gazelles (Eudorcas thomsonii) and Abdim's storks (Ciconia abdimii).
If food is widely available, groups of up to 18 have been documented scavenging carcasses, but no documentation has been made of golden wolves hunting in large groups like their close relatives, gray wolves (Canis lupus). (Moehlman and Hayssen, 2018) To hunt rodents, golden wolves use their ears to pinpoint the exact location of their prey and either leap through the air to catch them or dig them out of their burrows. For ungulate prey, golden wolves generally focus on young, old, or injured individuals. They will chase these weaker individuals away from the rest of the herd, like many other canids. Golden wolves cache any leftovers for later. When a family group is on the hunt, they will spread out rather than stay bunched together, with distances of a few hundred meters between each individual. They bark in order to stay in contact with one another during the hunt. (Eaton, 1969; Moehlman and Hayssen, 2018)
Predation
Spotted hyenas (Crocuta crocuta) are known to kill and eat African golden wolves in East Africa. Hyenas will often try to enter golden wolf dens to eat pups; when golden wolves see a hyena approaching their dens, they give a warning yowl, which alerts all of the adults nearby to chase the hyena away and bite its rump and genitals. Honey badgers (Mellivora capensis) have also been seen near golden wolf dens, but the adults have always chased them away before actual predation could be documented. Humans are also known to kill golden wolves in response to livestock predation. Besides aggressive actions by the adults guarding the den, not much is known about golden wolf anti-predator behavior. (Eddine, et al., 2017; Moehlman and Hayssen, 2018)
Ecosystem Roles
Due to the widespread loss of many large carnivores in northern Africa, African golden wolves have become more or less the top predator. They are also shown to be opportunists and generalists, allowing them to spread widely across the landscape into many different ecosystems.
One consequence of this is that they may be putting a large exploitative competition pressure on other predators, such as the common genet (Genetta genetta). Golden wolves are also speculated to compete exploitatively with fennec foxes (Vulpes zerda) and red foxes (Vulpes vulpes) and appear to be the superior competitor, as foxes of both species have been shown to abandon water sources and hide whenever golden wolves approach. Because of this competition, it is believed that fennec foxes began to occupy more sandy areas that were less favorable to golden wolves and both fox species shifted to nocturnal activity to avoid golden wolves. Golden wolves are also known to have dietary overlap with black-backed jackals (Canis mesomelas) and side-striped jackals (Canis adustus) in East Africa. The degree to which this competition impacts these three species has yet to be documented. (Amroun, et al., 2006; Eddine, et al., 2017; Karssene, et al., 2019; Moehlman and Hayssen, 2018) Predation by golden wolves may help control rodent and boar populations. They are also scavengers, and thus are important for cycling energy and nutrients throughout their ecosystem. Highly mobile species such as golden wolves are especially important in providing these services across a wide range of systems, so it is likely that golden wolves provide vital ecosystem services. (Amroun, et al., 2006; Eaton, 1969; Eddine, et al., 2017; Inger, et al., 2016) Golden wolves seem to have a commensalistic relationship with cheetahs (Acinonyx jubatus), as documented in Kenya by Eaton (1969). When golden wolves encounter cheetahs, they will search the vicinity for a kill and, if they find one, scavenge off of it. If there is no kill immediately nearby, golden wolves will remain around the cheetahs for a while, following their movements until either a kill is made or the cheetahs remain inactive for too long and the golden wolves move on.
Considering that golden wolves only feed on carcasses abandoned by cheetahs, this does not seem to be a parasitic relationship, as cheetahs are not prevented from eating as much as they need. There are also no records of cheetahs chasing golden wolves away from kills, suggesting that cheetahs are unaffected by their scavenging. Additionally, cheetah and golden wolf family groups have been documented living near one another without fighting over resources or killing offspring. This suggests that there is little to no competitive relationship between the two species. In fact, it has been documented that, occasionally, golden wolves assist cheetah kills by distracting a herd while the cheetah sneaks up from behind, suggesting the relationship may be mutualistic. However, Eaton (1969) speculates that this behavior likely does not occur outside of the study area because of high competition between scavengers in other areas. Indeed, in Serengeti National Park, golden wolves are rarely observed on carcasses, and scavenged meat makes up only a small portion of their diet. This is thought to occur because of the competition with other scavengers and the danger posed to golden wolves by other scavengers and larger predators that made the kill. (Eaton, 1969; Hunter, et al., 2006) Several golden wolves were shown to have antibodies for canine adenovirus, a liver infection, and canine coronavirus, a highly contagious intestinal disease. Both of these diseases can be easily spread to other canids through feces. Other individuals have tested positive for canine parvovirus, another intestinal disease that can spread to other canids, and canine distemper virus, a virus that affects the respiratory, gastrointestinal, and nervous systems. Canine distemper virus is an especially noteworthy disease because it can infect all sorts of other animals, including other canids, felids, and some primates.
Other parasites include Coccidia, which are intestinal parasites that can affect canids and felids, as well as hookworms, tapeworms, mange, flukes, ticks, and Toxocara canis, another intestinal parasite that affects canids. (Gherman and Mihalca, 2017; Moehlman and Hayssen, 2018)
Economic Importance for Humans: Positive
It has been shown that organic waste from humans is a major food source for African golden wolves, which means they can assist with waste removal. In fact, they have been documented in northern Ethiopia, along with spotted hyenas (Crocuta crocuta), removing organic waste that may be infected, therefore sanitizing rural areas. Studies of golden wolves have also given us a better understanding of how the domestication of dogs may have taken place. (Amroun, et al., 2006; Eaton, 1969; Yirga, et al., 2017)
Positive Impacts: research and education
Economic Importance for Humans: Negative
Similar to problems with gray wolves (Canis lupus) in the United States, predation by African golden wolves on livestock is a huge issue for rural communities in Africa. Golden wolves have been preying increasingly on livestock, and thus farmers have started retaliating. Between 2014 and 2015, farmers killed over 200 wolves. This is a serious problem in some areas, like Tunisia, where livestock has been documented to make up a significant part of the diets of golden wolves. However, the relative frequency of livestock in their diet seems to primarily correlate with the degree of livestock protection, suggesting that tighter management of livestock may be all that is required to solve this problem. (Eddine, et al., 2017; Karssene, et al., 2019) Additionally, golden wolves are hosts for the protozoan parasite Babesia gibsoni, which is commonly found in domestic dogs. Golden wolves are also reservoirs for Hepatozoon canis, and hosts for fleas.
If golden wolves continue to become more frequent in human settlements, their presence could increase the spread of these parasites, and all of the other diseases mentioned above. These diseases could be spread to pets, other golden wolves and canids that congregate in human areas, and even livestock. (Gherman and Mihalca, 2017; Maronpot and Guindy, 1970; Yirga, et al., 2017) Some parasites that golden wolves carry, like flatworms, have been shown to also infect humans. Golden wolves are reservoirs for parasites like filarioids, which are responsible for pink eye and various lung diseases in humans, and guinea worms, which cause severe pain where the worm migrates as well as nausea and vomiting in humans. An increase of golden wolves in human settlements may lead to an increase in infections, which could be especially devastating for rural communities that may not have the means to treat them medically. (Gherman and Mihalca, 2017)
Conservation Status
African golden wolves are listed as least concern but declining on the IUCN red list, and are not listed under the CITES appendices or the US Endangered Species Act. Durant et al. (2011) also documented a significant long-term decline in golden wolf populations in the Serengeti. Reasons for their decline may include overkill by hunters and poaching, both of which occur in the range of golden wolves, and retaliatory killing by farmers over predation of livestock. All of this is aided by the increased stock of automatic weapons in places like Ethiopia. It is speculated that many golden wolves are also being affected by predator control programs for other species, primarily through the consumption of poisoned carcasses. Additionally, vehicular collisions were the source of death for at least fifty golden wolves in the Sahara Desert, which will likely have larger implications as countries develop and roads become more intricate and widespread.
(Brito, et al., 2009; Durant, et al., 2011; Eddine, et al., 2017; Moehlman and Hayssen, 2018; Yalden, et al., 1996) Some countries in the range of golden wolves are currently or frequently in a state of war and other extreme conflict, which leads to increased habitat loss and fragmentation. Even the most remote regions, which usually experience little human presence, are affected by war, as opposing sides use these areas to gain tactical advantage. This pushes animals from places that may have once served as refugia. Because of this loss of habitat and refugia, many animals are locally extirpated. Additionally, the number of illegal killings was shown to increase drastically after a couple of years of war. The list of species killed likely includes golden wolves, because of their status as livestock predators. Outside of war-torn areas, widespread habitat loss due to human settlement, expansion, and over-grazing by livestock also occurs in the range of golden wolves. However, these are unlikely to have as large an impact on golden wolf populations compared to the factors discussed above. Human settlements are less threatening to golden wolves likely because of how opportunistic they are, and due to their demonstrated ability to thrive in anthropogenic landscapes. (Amroun, et al., 2006; Brito, et al., 2018; Eddine, et al., 2017) Part of the range of golden wolves is encompassed in the Tlemcen Hunting Reserve in Algeria. There are also several national parks scattered around Ethiopia and Eritrea, though many of these are poorly staffed and thus inadequately enforced. It is likely, however, that golden wolves receive at least some protection when considering all of the parks cumulatively. (Eddine, et al., 2017; Yalden, et al., 1996)
Contributors
Glossary
Ethiopian: living in sub-Saharan Africa (south of 30 degrees north) and Madagascar.
Palearctic: living in the northern part of the Old World. In other words, Europe and Asia and northern Africa.
acoustic: uses sound to communicate
agricultural: living in landscapes dominated by human agriculture.
altricial: young are born in a relatively underdeveloped state; they are unable to feed or care for themselves or locomote independently for a period of time after birth/hatching. In birds, naked and helpless after hatching.
carrion: flesh of dead animals.
causes or carries domestic animal disease: either directly causes, or indirectly transmits, a disease to a domestic animal
chemical: uses smells or other chemicals to communicate
crepuscular: active at dawn and dusk
desert or dunes: in deserts low (less than 30 cm per year) and unpredictable rainfall results in landscapes dominated by plants and animals adapted to aridity. Vegetation is typically sparse, though spectacular blooms may occur following rain. Deserts can be cold or warm and daily temperatures typically fluctuate. In dune areas vegetation is also sparse and conditions are dry. This is because sand does not hold water well so little is available to plants. In dunes near seas and oceans this is compounded by the influence of salt in the air and soil. Salt limits the ability of plants to take up water through their roots.
diurnal: 1. active during the day, 2. lasting for one day.
female parental care: parental care is carried out by females
fertilization: union of egg and spermatozoan
forest: forest biomes are dominated by trees, otherwise forest biomes can vary widely in amount of precipitation and seasonality.
male parental care: parental care is carried out by males
monogamous: having one mate at a time.
mountains: this terrestrial biome includes summits of high mountains, either without vegetation or covered by low, tundra-like vegetation.
native range: the area in which the animal is naturally found, the region in which it is endemic.
nomadic: generally wanders from place to place, usually within a well-defined range.
omnivore: an animal that mainly eats all kinds of things, including plants and animals
scent marks: communicates by producing scents from special gland(s) and placing them on a surface where others can smell or taste them
seasonal breeding: breeding is confined to a particular season
sedentary: remains in the same area
social: associates with others of its species; forms social groups.
solitary: lives alone
stores or caches food: places a food item in a special place to be eaten later. Also called "hoarding"
suburban: living in residential areas on the outskirts of large cities or towns.
swamp: a wetland area that may be permanently or intermittently covered in water, often dominated by woody vegetation.
tactile: uses touch to communicate
temperate: that region of the Earth between 23.5 degrees North and 60 degrees North (between the Tropic of Cancer and the Arctic Circle) and between 23.5 degrees South and 60 degrees South (between the Tropic of Capricorn and the Antarctic Circle).
terrestrial: living on the ground.
territorial: defends an area within the home range, occupied by a single animal or group of animals of the same species and held through overt defense, display, or advertisement
tropical: the region of the earth that surrounds the equator, from 23.5 degrees north to 23.5 degrees south.
tropical savanna and grassland: a terrestrial biome. Savannas are grasslands with scattered individual trees that do not form a closed canopy. Extensive savannas are found in parts of subtropical and tropical Africa and South America, and in Australia.
savanna: a grassland with scattered trees or scattered clumps of trees, a type of community intermediate between grassland and forest. See also tropical savanna and grassland biome.
temperate grassland: a terrestrial biome found in temperate latitudes (>23.5° N or S latitude). Vegetation is made up mostly of grasses, the height and species diversity of which depend largely on the amount of moisture available.
Fire and grazing are important in the long-term maintenance of grasslands.
urban: living in cities and large towns, landscapes dominated by human structures and activity.
visual: uses sight to communicate
viviparous: reproduction in which fertilization and development take place within the female body and the developing embryo derives nourishment from the female.
Disclaimer: The Animal Diversity Web is an educational resource written largely by and for college students. ADW doesn't cover all species in the world, nor does it include all the latest scientific information about organisms we describe. Though we edit our accounts for accuracy, we cannot guarantee all information in those accounts. While ADW staff and contributors provide references to books and websites that we believe are reputable, we cannot necessarily endorse the contents of references beyond our control. This material is based upon work supported by the National Science Foundation Grants DRL 0089283, DRL 0628151, DUE 0633095, DRL 0918590, and DUE 1122742. Additional support has come from the Marisla Foundation, UM College of Literature, Science, and the Arts, Museum of Zoology, and Information and Technology Services.
https://animaldiversity.org/accounts/Canis_simensis/
Canis simensis: INFORMATION - ADW
Geographic Range
The Ethiopian wolf has a very restricted range. It is found only in six or seven mountain ranges of Ethiopia. This includes the Arssi and Bale mountains of southeast Ethiopia, the Simien mountains, northeast Shoa, Gojjam, and Mt. Guna (Ginsberg and Macdonald 1990). The largest population exists in the Bale Mountains National Park with 120-160 individuals (Sillero-Zubiri and Gottelli 1995).
Physical Description
Ethiopian wolves are long-limbed, slender looking canids. They have a reddish coat with white markings on the legs, underbelly, tail, face, and chin. The boundary between the red and white fur is quite distinct. White markings on the face include a characteristic white crescent below the eyes and a white spot on the cheeks. The chin and throat are also white. The tail is marked with an indistinct black stripe down its length and a brush of black hairs at the tip. The ears are wide and pointed and the nose, gums, and palate are black. Females are generally paler in color than males and are smaller overall. There are five toes on the front feet and four on the rear feet. Males measure from 928 to 1012 mm (average 963 mm) and females from 841 to 960 mm (average 919 mm). Males weigh from 14.2 to 19.3 kg (average 16.2) and females from 11.2 to 14.2 kg (average 12.8). The tail is from 270 to 396 mm in length. The dental formula is 3/3:1/1:4/4:2/3, with the lower third molar being absent occasionally. (Sillero-Zubiri and Marino, 1995)
Reproduction
For Ethiopian wolves, dispersal from their natal packs is limited due to habitat saturation. Males generally remain in their natal pack, and a small number of females disperse in their second or third year. To combat this high potential for inbreeding inside the closely related pack, matings outside the pack occur frequently. Copulation outside the pack occurs with males of all rank, but those within the pack occur only between the dominant male and female.
While copulation between males and subordinate females does occur, pups that may arise from this union rarely survive (Sillero-Zubiri et al. 1996). Prior to copulation, the dominant female increases her rate of scent marking, play soliciting, food begging towards the dominant male, and aggressive behavior towards subordinate females. Ethiopian wolves mate over a period of 3-5 days, involving a copulation tie that lasts up to 15 minutes. It is not uncommon for a subordinate female to assist in suckling the young of the dominant female. In these cases, the subordinate lactating female is likely pregnant and either loses or deserts her own young for those of the dominant female. Once a year between October and January, the dominant female in each pack gives birth to a litter of 2-6 pups. Gestation lasts approximately 60-62 days. The female gives birth to her litter in a den she digs in open ground under a boulder or in a rocky crevice. The pups are born with their eyes closed and no teeth. They are charcoal gray with a buff patch on their chest and under areas. At about 3 weeks, the coat begins to be replaced by the normal adult coloring and the young first emerge from the den. After this time, den sites are regularly shifted, sometimes up to 1300 m. Development of the young occurs in three stages (Sillero-Zubiri and Gottelli 1994). The first covers weeks 1-4, when the pups are completely dependent on their mother for milk. The second occurs from weeks 5-10, when the pups' milk diet is supplemented by solid food regurgitated from all pack members. It ends when the pups are completely weaned. Finally, from week 10 until about 6 months, the young survive almost solely on solid food provided by adult members of the pack. Adults have been seen providing food for young up to 1 year old. The Ethiopian wolf attains full adult appearance at 2 years of age, and both sexes are sexually mature during their second year (Sillero-Zubiri and Gottelli 1994).
Data on life expectancy are inadequate, but C. simensis likely lives 8-9 years in the wild (Macdonald 1984).

Lifespan/Longevity

Ethiopian wolves may live 8 to 10 years in the wild, although one wild individual was recorded living to 12 years. (Sillero-Zubiri and Marino, 1995)
Range lifespan (wild): 12 years (high)
Typical lifespan (wild): 10 years (high)

Behavior

Although it primarily hunts alone, C. simensis is a social animal, forming packs of 3-13 individuals (mean 6). Packs congregate for social greetings and border patrols at dawn, midday, and evening, but forage individually during the rest of the day. The Ethiopian wolf is diurnal and sleeps in the open at night, alone or in groups. Pack structure is hierarchical and well defined by dominant and submissive displays, as seen in other canids. Each sex has a dominance rank, with shifts occurring occasionally in males but not in females. Play-fighting among pups in the first few weeks begins to establish rank between siblings (Sillero-Zubiri and Gottelli 1994).

Ethiopian wolf packs are territorial. C. simensis travels in packs to patrol its territory. Packs maintain the boundaries of their territories by scent marking and vocalization. Home ranges are small for a canid of this size: the typical home range is 4-15 square kilometers, with an average wolf density of 1 per square kilometer. Skirmishes between neighboring packs are frequent.

Canis simensis makes several types of vocalization. Alarm calls are emitted at the sight or scent of man, dogs, or unfamiliar wolves. They start with a "huff" and are followed by a series of "yelps" and "barks." Greeting calls consist of "growls" of threat, high-frequency "whines" of submission, and "group yip-howls" given at the reunion of pack members. "Lone howls" or "group howls" can be heard 5 km away and are used for long-distance communication (Sillero-Zubiri and Gottelli 1994).
Communication and Perception

Food Habits

Canis simensis is a carnivore, generally preying on rodents ranging in size from the giant mole-rat Tachyoryctes macrocephalus (900 g) down to the common grass rats (Arvicanthis blicki, Lophuromys melanonyx; 90-120 g) (Ginsberg and Macdonald 1990). In 689 feces, murid rodents accounted for 95.8% of all prey items, and 86.6% belonged to the three species listed above (Sillero-Zubiri and Gottelli 1994). When present in the hunting range, giant mole-rats are the primary component of the diet; in their absence, the common mole-rat Tachyoryctes splendens is most commonly eaten (Malcolm 1997). Canis simensis also eats goslings, eggs, and young ungulates (reedbuck and mountain nyala) and occasionally scavenges carcasses. The Ethiopian wolf often caches its prey in shallow holes (Ginsberg and Macdonald 1990). Prey is usually captured by digging it out of burrows. Wolves patrol areas of high prey density by walking slowly; once prey is located, the wolf moves stealthily towards it and grabs it with its mouth after a short dash. Occasionally, the Ethiopian wolf hunts cooperatively to bring down young antelopes, lambs, and hares (Sillero-Zubiri and Gottelli 1994).

Ecosystem Roles

Ethiopian wolves are top predators in the ecosystems in which they live.

Economic Importance for Humans: Positive

Canis simensis helps control populations of rodents in its habitat.

Economic Importance for Humans: Negative

The Ethiopian wolf occasionally preys on lambs (Sillero-Zubiri 1995).

Conservation Status

Ethiopian wolves are considered endangered by both the IUCN and the U.S. Endangered Species Act. They are protected from hunting under Ethiopian law. Efforts to curb the transmission of diseases, especially rabies, from domestic dogs to Ethiopian wolves, and to prevent hybridization with domestic dogs, have been undertaken. In addition, monitoring of Ethiopian wolf populations continues.
(Sillero-Zubiri and Marino, 1995)

Other Comments

A recent genetic study suggests that C. simensis is more closely related to gray wolves and coyotes than to any other African canid (jackals, foxes, wild dogs). It is hypothesized that C. simensis is an evolutionary remnant of a past invasion of North Africa by gray wolf-like ancestors (Gottelli et al. 1994).

Contributors

Andrew Bunker (author), University of Michigan-Ann Arbor.

Glossary

Ethiopian: living in sub-Saharan Africa (south of 30 degrees north) and Madagascar.
altricial: young are born in a relatively underdeveloped state; they are unable to feed or care for themselves or locomote independently for a period of time after birth/hatching. In birds, naked and helpless after hatching.
bilateral symmetry: having body symmetry such that the animal can be divided in one plane into two mirror-image halves. Animals with bilateral symmetry have dorsal and ventral sides, as well as anterior and posterior ends. Synapomorphy of the Bilateria.
carnivore: an animal that mainly eats meat.
carrion: flesh of dead animals.
chemical: uses smells or other chemicals to communicate.
cooperative breeder: helpers provide assistance in raising young that are not their own.
crepuscular: active at dawn and dusk.
diurnal: 1. active during the day; 2. lasting for one day.
dominance hierarchies: ranking system or pecking order among members of a long-term social group, where dominance status affects access to resources or mates.
endothermic: animals that use metabolically generated heat to regulate body temperature independently of ambient temperature. Endothermy is a synapomorphy of the Mammalia, although it may have arisen in a (now extinct) synapsid ancestor; the fossil record does not distinguish these possibilities. Convergent in birds.
monogamous: having one mate at a time.
motile: having the capacity to move from one place to another.
mountains: this terrestrial biome includes summits of high mountains, either without vegetation or covered by low, tundra-like vegetation.
native range: the area in which the animal is naturally found, the region in which it is endemic.
nocturnal: active during the night.
sexual: reproduction that includes combining the genetic contribution of two individuals, a male and a female.
social: associates with others of its species; forms social groups.
solitary: lives alone.
stores or caches food: places a food item in a special place to be eaten later. Also called "hoarding."
tactile: uses touch to communicate.
temperate: that region of the Earth between 23.5 degrees North and 60 degrees North (between the Tropic of Cancer and the Arctic Circle) and between 23.5 degrees South and 60 degrees South (between the Tropic of Capricorn and the Antarctic Circle).
terrestrial: living on the ground.
territorial: defends an area within the home range, occupied by a single animal or group of animals of the same species and held through overt defense, display, or advertisement.
tropical savanna and grassland: a terrestrial biome. Savannas are grasslands with scattered individual trees that do not form a closed canopy. Extensive savannas are found in parts of subtropical and tropical Africa and South America, and in Australia.
savanna: a grassland with scattered trees or scattered clumps of trees, a type of community intermediate between grassland and forest. See also tropical savanna and grassland biome.
temperate grassland: a terrestrial biome found in temperate latitudes (>23.5° N or S latitude). Vegetation is made up mostly of grasses, the height and species diversity of which depend largely on the amount of moisture available. Fire and grazing are important in the long-term maintenance of grasslands.

Disclaimer: The Animal Diversity Web is an educational resource written largely by and for college students.
https://www.wolfworlds.com/wolf-habitat/
Wolf Habitat - Wolf Facts and Information
Wolf Habitat and Distribution

Wolves are the wild dogs of the world, and they have a vast distribution across many types of habitats. They are very diverse animals, and for this reason their habitat is spread widely around the world. It isn't true that they only live in very thick forests and come out at night. Wolves have been identified in many areas where you might not imagine them being able to survive. In the wild, wolves thrive in forested areas and grasslands, but they also exist in steppes, tundra, boreal forests, and deserts. Their extreme adaptability surprises many people, because most wild dogs favor one type of habitat. This versatility has helped them survive despite their status as an endangered animal. Most wolves are classified according to where they live and the type of vegetation that surrounds them; coat, habitat, and classification are all linked.

Wolf Classification

Classifying unique species of wolves requires extensive knowledge about the wolf species and their behavior, as well as a thorough understanding of how they develop. There are many different species of wolf, each unique in appearance and mannerism. Many are considered hybrids of the gray wolf, the common ancestor of all wolves. Here are some categories of wolves that people often find controversial:

African Wolf

The African wolf is a medium-sized canid with golden to ginger-colored fur, lightly built, with relatively long legs and ears. Its coat is generally a tawny yellow to buff brown. It carries a characteristic black mark on its forelegs and chest, with a fainter one on its shoulders. The ears are relatively large and pointed.

Gray Wolf

The gray wolf is a canine with a long bushy tail that is often black-tipped.
Its coat color is typically a mix of gray and brown with buffy facial markings that extend down to the lower abdomen, but the color can vary from solid white to brown or black.

Red Wolf

Nocturnal and territorial, the red wolf is a master hunter, capable of taking down prey three times its size. It has a mottled gray coat with long legs and black-tipped ears.

Indian Plains Wolf

The Indian wolf is one of the largest subspecies of the grey wolf, comparable to the Arabian wolf and Himalayan wolf. Its fur is comparatively short and sleek, with colors ranging from almost white to dark grey or black. It occurs in a wide range of habitats across India and Nepal, although it generally avoids densely populated areas.

Arabian Wolf

The Arabian wolf is the smallest wolf subspecies, a desert-adapted subspecies that normally lives in small groups. It is omnivorous, eating small to medium-sized prey. Because of its genetic similarity to the Ethiopian wolf, the two are thought to share a common ancestor.

Polar Wolf (Arctic Wolf)

The Arctic wolf, also known as the white wolf or polar wolf, is a subspecies of grey wolf native to Canada's Queen Elizabeth Islands. Its habitat lies around the Arctic Circle. It is larger than mainland gray wolves, and its coat is mostly white with few markings.

Eurasian Wolf

The Eurasian wolf is the largest of all grey wolf subspecies. It has a large range throughout continental Eurasia and currently exists in the wild in Eastern Europe, Middle Asia (excluding China), Central Asia, and the Himalayas. It has also been introduced to Northern America, Italy, and Japan. Wild Eurasian wolves are now rare in Western Europe; however, they can be found in the Balkans, France, Germany, and around the borders of Russia.

Where can you find wolves?

Like any predator, wolves tend to be found where prey is abundant. This is why we find them in areas inhabited by deer, caribou, elk, and other herbivores.
Some wolf species live only in the United States, in forests and other areas where prey animals are plentiful. Others live in the cold Arctic regions where hardly any other animals survive the bitter cold. There are wolves in the mountain ranges of Colorado thanks to some reintroduction programs along the Rockies that have been very successful. Regardless of the location, these animals need room to roam: a home range can span from 33 to 6,200 km², depending on the type of wolf and where it happens to reside. Research has also found evidence of wolves living throughout the Northern Hemisphere, even where their numbers are small. They can be found along the plains, in the savannah deserts of Africa, and in forests with both hardwood and softwood. As long as their basic needs are met, they can survive. Wolves are also able to adapt and push into new territory when necessary for their survival.

[Image: An arctic wolf in the snow]

How wolves live in the Alaskan tundra

Most of the wolves left in the world today are found living on the frozen tundra of Alaska and Canada. Here they can live in remote areas and not be bothered as they are in places where humans are more likely to settle. Even so, that doesn't mean they aren't in jeopardy due to a lack of food; hunters go to those areas as well in the hope of successfully killing wolves. Wolves spend about 8 to 10 hours every day moving through their home range and will rarely stay in one place for long. They mark their habitat with urine as well as a scent that comes from glands in their tails. These markers let other wolves know that the territory has already been claimed. It is not unprecedented for the habitat of one pack of wolves to overlap with that of other packs. Generally this is very peaceful, since the different wolf packs avoid each other.
However, when the size of the habitat is reduced and food is hard to find, they can become more aggressive towards each other. The leading reason why wolves continue to have a hard time surviving today is that their habitat is being destroyed. People continue to want more land for their homes and ranches, and businesses continue to tear down the areas these animals inhabit. Without a vast habitat to live in, they struggle to find enough food to survive. That is why they seem to be attacking more domesticated animals: they need a source of food, and when one is placed in front of them, they cannot differentiate between it and what nature offers. Wolves have a bad reputation for being destructive, but when you view the whole picture you will see that humans are the ones responsible for taking away their habitat.

Where do wolves roam in Africa?

Wolves have long been associated with Africa's stories and pictures about wild animals, but the truth about where they actually live on the continent is surprising. In some countries, such as Ethiopia, they are often viewed as dangerous animals, since livestock can be an easy target for wolves. However, they are also protected as part of the country's natural heritage. The Serengeti wolf is a subspecies of gray wolf. It is native to Africa and primarily found in the Serengeti region of Tanzania. While the Serengeti wolf is considered endangered, recent conservation efforts have helped maintain its population, estimated at 1,500 to 2,000 individuals. Other places you can find African wolves are the tall grass savannas of Botswana, Ethiopia, Kenya, Mozambique, Namibia, Rwanda, South Africa, and Sudan.

Wolf footprints in Europe and Asia

The wolf has a long history in Europe and Asia. In the early 20th century, wolves were seen as predators that needed to be eradicated from these regions.
It was thought that they were going to completely wipe out the entire population of reindeer. However, wolf populations were not entirely exterminated from western Europe and Asia in the mid-to-late 20th century. Many wolf populations continued to exist in wilderness areas, away from human influence. As a consequence of the increasing human population and economic growth, these areas have become surrounded by farms and cities, and human persecution is now considered one of the direct threats to wolf populations. Europe and Asia today represent a vast region with many different kinds of wolves. The wolves present across the region, however, can be divided into two types: gray wolves and red wolves. Gray wolves (Canis lupus) are the most common wolf species in Europe and Asia today. These wolves are easily recognized by their large size, shaggy pelt, and bushy tail. They generally live in packs but can sometimes be found living in pairs as well.

Deforestation and Wolf Habitat

In several important respects, wolves are dependent upon the integrity of their habitat. The management of prey in places outside the species' core range is also heavily influenced by habitat and ecosystem health. Deforestation has had a substantial effect on the way of life of wolves: it interferes with their prey, their habitat, and even the ecosystem balance. As a result, they have less food to hunt, which increases tensions and leads to wolf attacks on humans.

Diversity is Good, Even for Wolves

Over the past half-century, wolves have undergone a dramatic recovery in North America. Back from the brink of extinction, their populations are now flourishing in many regions. Despite the threats to wolves' habitat and existence, their diversified species have allowed them to survive for a long time. This characteristic has seen them withstand a lot of ecological and human stress.
Although many conservation efforts are under way, humans must make a firm commitment to protecting the existence and habitat of these creatures in order to avoid their extinction.
https://animalia.bio/african-golden-wolf
African Golden Wolf - Facts, Diet, Habitat & Pictures on Animalia.bio
The African golden wolf (Canis lupaster) is a canine that plays a prominent role in some African cultures. It was previously classified as an African variant of the golden jackal. In 2015, a series of analyses of the species' mitochondrial DNA and nuclear genome demonstrated that it was, in fact, distinct from the golden jackal and more closely related to the gray wolf and the coyote.

Appearance

The African golden wolf has a relatively long snout and ears, and a comparatively short tail. Fur color varies individually, seasonally, and geographically, though the typical coloration is yellowish to silvery grey, with slightly reddish limbs and black speckling on the tail and shoulders. The throat, abdomen, and facial markings are usually white, and the eyes are amber-colored. Females bear two to four pairs of teats. Although superficially similar to the golden jackal (particularly in East Africa), the African golden wolf has a more pointed muzzle and sharper, more robust teeth. The ears are longer in the African wolf, and the skull has a more elevated forehead.

Distribution

The African golden wolf is commonly found in the northeast and northwest of Africa, from Senegal in the west to Egypt in the east, throughout Libya, Algeria, and Morocco in the north, and south to Chad, Nigeria, and Tanzania. African golden wolves are adapted to live in different habitats: in Algeria they occur in Mediterranean, coastal, and hilly areas (including hedged farmlands, scrublands, pinewoods, and oak forests), while populations in Senegal inhabit tropical, semi-arid climate zones including Sahelian savannahs. Populations in Mali have been documented in arid Sahelian massifs. In Egypt, these animals inhabit agricultural areas, wastelands, desert margins, rocky areas, and cliffs. At Lake Nasser, they live close to the lakeshore.

Habits and Lifestyle

The African golden wolf's social organization is very flexible and differs according to the food that is available.
The breeding pair is the basic unit, along with its current offspring and perhaps members of previous litters staying on as "helpers". Big groups are rare, observed only in areas with much human waste. Relationships within African golden wolf families are comparatively peaceful: the wolves will lie with each other and groom each other. They are more active in the daytime. These animals are very territorial, with the pair patrolling and marking their territory in tandem. Both partners, as well as their helpers, behave aggressively towards intruders, particularly those of the same sex; partners do not help each other repel intruders of the opposite sex. African golden wolves frequently groom one another, particularly during courtship, which can last up to 30 minutes. When greeting, they nibble the face and neck of one another. When fighting, African golden wolves slam their opponents with their hips and bite and shake the shoulder. Their vocalizations are similar to those of the domestic dog, with seven sounds having been recorded, including howls, barks, growls, whines, and cackles. Subspecies can be recognized by differences in their howls. One of the most commonly heard sounds is a high, keening wail, of which there are three varieties: a long single-toned continuous howl, a wail that rises and falls, and a series of short, staccato howls. These howls are used to repel intruders and attract family members. African golden wolves also howl in chorus; it is thought that they do so to reinforce family bonds and establish territorial status.

Diet and Nutrition

African golden wolves are carnivores and scavengers. They eat small prey, including hares, rats, grass cutters, ground squirrels, snakes, lizards, and ground-nesting birds such as francolins and bustards. They also eat many insects, including dung beetles, termites, larvae, and grasshoppers.
They will also hunt young gazelles, warthogs, and duikers, and eat animal carcasses, fruit, and human refuse.

Mating Habits

African wolves are monogamous. Their courtship rituals are extremely long, during which the pair stay almost constantly together. Before mating, they patrol and mark their territory with scent. After a gestation period of about 63 days, the female gives birth to a litter of 1 to 9 pups. In the Serengeti (in Eastern Africa), pups are born in December-January. They begin to eat solid food after one month. Weaning starts at the age of 2 months and ends at 4 months. By then the young can venture up to 50 m out from the den, being semi-independent and sometimes sleeping in the open. The mother feeds her pups more often than the father or helpers do. The playing behavior of the pups becomes increasingly aggressive as they compete for rank, which is established after 6 months.

Population

Population threats

The main threat to the African golden wolf is the loss of its habitat. As the human population grows, the resulting expansion of roads, settlements, and agriculture threatens this species. Losing their habitat, African golden wolves invade human settlements, where people consider them a danger to livestock and poultry and kill them as pests.

Population number

The IUCN Red List and other sources do not provide an estimate of the African golden wolf's total population size. Currently, this species is classified as Least Concern (LC) on the IUCN Red List, but its numbers today are decreasing.

Ecological niche

As African golden wolves consume garbage and animal carcasses, they play a very important role in the ecosystem as scavengers. They also control rodent and insect numbers, consuming them as prey.

Fun Facts for Kids

African golden wolves often carry away more food than they can consume and cache the surplus, which is generally recovered within 24 hours.
African golden wolves can catch grasshoppers and flying termites either in mid-air or by pouncing on them while they are on the ground. African golden wolves are fiercely intolerant of other scavengers. They dominate vultures on kills and one wolf can hold dozens of vultures at bay by threatening, snapping, and lunging at them. African golden wolves often feed alongside Spotted hyenas, though they will be chased if they approach too closely. Spotted hyenas in turn sometimes follow wolves during the gazelle fawning season, as wolves are effective at tracking and catching young animals. According to Arab Egyptian folklore the African golden wolf can cause chickens to faint from fear by simply passing underneath their roosts.
https://www.nbcnews.com/id/wbna32275502
Dog domestication likely started in N. Africa
Aug. 3, 2009, 11:03 PM UTC / Source: Discovery Channel
By Jennifer Viegas

[Image: A Basenji, a dog breed indigenous to sub-Saharan Africa. Humans might have first domesticated dogs in Africa, with Egypt being one possibility, since wolves are native to that region. (iStockPhoto)]

Modern humans originated in Africa, and now it looks like man's best friend first emerged there too. An extensive genetic study on the ancestry of African village dogs points to a Eurasian — possibly North African — origin for the domestication of dogs. Prior research concluded that dogs likely originated in East Asia. However, this latest study, the most thorough investigation ever on the ancestry of African village dogs, indicates otherwise. "Village" dogs are local, semi-feral dogs that cluster around human settlements in much of the world. "I think our results cast some doubt on the hypothesis of an East Asian origin for dog domestication that was put forward based on previous mitochondrial DNA genetic research," lead author Adam Boyko told Discovery News. Boyko, a research associate in the Department of Biological Statistics and Computational Biology at Cornell University, and his colleagues looked at three genetic markers for 318 village dogs from seven regions in Egypt, Uganda and Namibia. The scientists performed the same DNA analysis on a number of putatively African dog breeds, as well as on Puerto Rican street dogs and mixed-breed dogs from the United States. The scientists determined genetic diversity was just as high for the African dogs as it was for the East Asian village dogs that were the focus of the earlier research. "Species tend to show the highest genetic diversity near their place of origin," said Boyko.
He explained that this is because the species have "been there longer and therefore have had more time to accumulate diversity, and because as a species expands its range by colonizing a new region, it usually does so with a relatively small band of individuals carrying just a subset of the genetic diversity found in the ancestral population." Humans might have then first domesticated dogs from wolves in Africa, with Egypt being one possibility, since wolves are native to that region. Many existing wild species of canid, such as the Egyptian jackal, popularly featured in ancient Egyptian art, are now critically endangered. The new study, published in the latest issue of the Proceedings of the National Academy of Sciences, also found that some so-called "African" dog breeds are not really native to Africa. These include Pharaoh hounds and Rhodesian ridgebacks, which turned out to not have much indigenous African ancestry. On the other hand, "Basenjis are clearly an indigenous sub-Saharan breed, and Afghan hounds and Salukis appear to be indigenous to North Africa or the Middle East," Boyko said. The pattern seems to be that if a region was colonized or otherwise settled by Europeans, dogs of that area now tend to be less indigenous. Dogs in central Namibia, for example, "looked nearly identical genetically to dogs you would find on the streets of Puerto Rico or in animal shelters in the U.S., a pretty clear indication that these are mixes of various modern breeds." Robert Wayne, an expert on wolves and dog domestication and a professor in the Department of Ecology and Evolutionary Biology at UCLA, told Discovery News that he supports the new findings. "It's clear dogs did not originate in sub-Saharan Africa, since wolves are not native to that area," asserts Wayne. However, he agrees that Eurasia is the more likely overall place where dogs were first domesticated, with Egypt being a possibility. 
Both Wayne and Boyko hope future genetic research on canines will continue to shed light on the origins of indigenous dog populations to better confirm and pinpoint exactly where the domestication of dogs first happened.
yes
Zoogeography
Are wolves native to Africa?
no_statement
"wolves" are not "native" to africa.. africa does not have "native" "wolves".
https://www.tripadvisor.com/ShowUserReviews-g312558-d7369319-r704329337-Garden_Route_Wolf_Sanctuary-Plettenberg_Bay_Western_Cape.html
Wolves in South Africa??? - Review of Garden Route Wolf ...
Wolves in South Africa??? No, wolves are not native to South Africa, but that hasn't stopped people from bringing them in and then abandoning them. Same with wolf dogs. Obviously, these guys should never be turned loose into the South African wild since they're not native, so it's wonderful that this sanctuary exists. They do a marvelous job and the wolves, in three packs, have enough room to run and play. A worthy place to visit and to support. Date of experience: August 2019 (jmaison). These reviews are the subjective opinions of Tripadvisor members and not of Tripadvisor LLC.
Disappointed. Expected much more, especially because of the reviews. No guide provided. Roaming around aimlessly. No one to ask questions. No one available to even point us in any direction. Wolves are not endemic to Africa. So, not really an authentic experience. But the animal farm is a fun petting experience for the kids. Like that. But the place should just be marketed as that. Date of experience: August 2019 (Ludmila Y).
We were pleasantly surprised by this sanctuary. It is a must-see for any animal lover. The guides are totally dedicated to the wolves, wolf dogs and huskies. You can see the mutual love and trust in each and every one, both human and animal. The animals look happy and you can see they are well taken care of. There is lots more to see, but the wolves are the absolute highlight!! Loved, loved it! Will recommend it to old and young! Pay the bit extra for the guided tour; it is very informative and you get to go into the enclosures. Date of experience: August 2019 (Jamien S).
You must visit this sanctuary if you have an interest in animals and pets. We had the guided tour and we realised that there is a huge difference between dogs and wolves. There is also a touch farm with farm animals, always a treat to get close to animals. Date of experience: July 2019 (willems170).
We last went to the Wolf Sanctuary about 14 years ago, but we remember not being too impressed then! Our 10-year-old has been begging us to take her as she loves wolves! So after reading the excellent reviews we decided to go, even though we prefer seeing animals in the wild! Our 10-year-old was too young for the guided tour, but we hung around the guided tour and watched them go into some of the wolf areas! The wolves definitely seem to love the guides and looked happy! The fenced areas seem 100 percent better than before and the animals all look well and happy! We also loved the farmyard animals, especially the piglets and baby rabbits! The llamas were also very entertaining! My daughter is now going to adopt a wolf, which she is so excited about, and I can relax knowing that the wolf is well taken care of at the Sanctuary and we can visit when we are there again! Date of experience: July 2019 (mightymousemdsm).
Lovely experience seeing the wolves, but there is also a farmyard with lots of lovely animals you can interact with, including pigs, cows, goats, alpacas, monkeys, bunnies and many more! A lot of baby animals roaming around and you can pick up the bunnies; really a wonderful experience! Date of experience: June 2019 (Lsharp88).
no
Zoogeography
Are wolves native to Africa?
no_statement
"wolves" are not "native" to africa.. africa does not have "native" "wolves".
https://www.awf.org/blog/saving-critically-endangered-ethiopian-wolf-extinction
Saving the critically endangered Ethiopian wolf from extinction ...
Saving the critically endangered Ethiopian wolf from extinction. By Jacqueline Conciatore, African Wildlife Foundation's Writer & Editorial Manager. To make the greatest conservation impact, AWF uses a range of strategies to protect species in priority landscapes. Though our work is organized around iconic wildlife such as elephants, rhinos, and large carnivores, we design our programs to benefit local human communities as well as all indigenous wildlife and habitats. Among the key species we focus on is one of the world’s rarest canids, the Ethiopian wolf. Also known as the Simien fox or Simien jackal, this highland wolf numbers no more than 440, and perhaps as few as 360, making it Africa’s most endangered carnivore. Although scientists debate which canids are wolf species versus subspecies, the traditional view is that there are three wolf species in the world — the Ethiopian, red, and grey wolf. The Ethiopian wolf is the only wolf species native to Africa and is found in only seven Ethiopian mountain ranges, with the largest populations in the Bale Mountains and the second largest in the Simien Mountains. Endemic and endangered: With a somewhat regal bearing, the Ethiopian wolf is the size of a coyote and looks like a red fox, sporting a tawny orange or reddish coat, white throat patch, and bushy tail. It has a narrow muzzle, long legs, and pointed ears. Although shy around humans, it is social with other wolves, living in packs that typically include extended family members, male and female. All pack members help with raising and protecting pups. Wolf mothers give birth in dens dug under boulders, inside crevices or in other protected spots.
These dens can have multiple entrances and a network of tunnels, and the adults regularly shift pups from one den to another. For food, the Ethiopian wolf depends on high-altitude rodents, especially the big-headed mole-rat, which tunnels to foraging spots but feeds above ground. The Ethiopian wolf is a loner when hunting, but even here it may rely on others for help. Scientists have noted that Ethiopian wolves forage right in the middle of gelada herds, large groups of primates also known as “bleeding heart monkeys”. The wolves do not prey upon the geladas’ young, and the geladas do not flee from the wolves like they do from feral dogs. Researchers have found the wolves capture rodents at twice the rate when hunting in a gelada group. It is not clear why they have greater success; perhaps the geladas flush rodents out of their burrows by disturbing vegetation. Or, it could be the wolves blend in with the scattered geladas, and the rodents simply do not notice them. Community-led initiatives safeguard the Ethiopian wolf: With numbers so small, Ethiopian wolves are highly vulnerable to disease outbreaks, and in the past few years, they have experienced devastating rabies and distemper outbreaks. AWF supports the Ethiopian Wolf Conservation Programme, which administers rabies and distemper vaccines to the wolves, but also to area domestic dogs, who can carry rabies and pose a significant disease threat if not vaccinated. To date, the program has vaccinated tens of thousands of dogs. In partnership with the Ethiopian government, the Ethiopian Wolf Conservation Programme also recruits local community members to act as Wolf Monitors and Wolf Ambassadors who track wolf populations and share conservation messages in communities. The monitors are very dedicated and work through all kinds of conditions to follow the wolf packs and keep up with their status and life events. This work is critical to ensuring a rapid response in the case of disease outbreaks.
AWF’s work in Ethiopia incorporates our Classroom Africa program. In exchange for a conservation commitment from the Adisge community near Simien Mountains National Park, Classroom Africa rebuilt the community’s badly under-resourced school. The new Adisge Primary School opened its doors in 2017. For the first time, the school has enough space to enroll 7th and 8th graders. The re-design has made the Adisge school eco-friendly and comfortable, and the site includes new teacher housing as well. Classroom Africa fosters a conservation ethic among young people through eco-clubs and field trips to national parks and other protected areas. The goal is to develop a new generation of local conservation leaders who will be passionate about protecting wildlife. AWF also has invested in high-end eco-lodges in the Simiens and Bale Mountains parks that help create jobs from nature-based tourism. In addition, we support the Ethiopian Wildlife Conservation Authority, which manages both parks, to improve park infrastructure, management and strategically promote tourism.
yes
Zoogeography
Are wolves native to Africa?
yes_statement
"wolves" are "native" to africa.. africa is home to "wolves".
https://wolfsa.org.za/
Visit the Tsitsikamma Wolf Sanctuary - Things to Do in the Eastern ...
GUARDIANS OF THE WOLVES. Learn all about wolves: Families love meeting and interacting with our wolves. Bring a picnic basket and the whole family for an interactive and exciting outing. Join Our Wolf Pack: Our volunteer program provides a unique opportunity to learn from our team how best to provide for the wolves daily and how to ensure their safety. Meet Our Wolf Pack: Opened in 2001 in the beautiful Tsitsikamma area of South Africa, The Tsitsikamma Wolf Sanctuary (TTWS) provides a home and safe haven for unwanted and abused wolves and wolf dogs. A non-profit organisation, and the first to open in the country, our sanctuary aims to create awareness by informing and educating visitors through a once-in-a-lifetime encounter. Escape to the great outdoors, spend the day soaking up the impressive surroundings and, guided by our knowledgeable and passionate staff, get to meet these magnificent animals. What people had to say: What they do here is amazing for creatures who aren't native to South Africa. The wolves are clearly well taken care of, and they don't breed any animals. I was very impressed with the tour and the stories behind each pack. (Merilyn Prinsloo) It was good to see all the wolves and to be near them and learn about them. (George Wyatt) Awesome work they are doing as a sanctuary for wolves! (Eleanore Eades) An experience like no other. I so loved learning about the wolves and getting to interact with them so closely. Thank you Robin and team for your dedication, passion, and knowledge. (Megan Kelly Botha) Our two little animal lovers, ages 5 and 8, loved it. Andrew was knowledgeable and a great guide. (James Wyatt) About Us: The Tsitsikamma Wolf Sanctuary is a non-profit organisation in the Eastern Cape of South Africa providing a safe haven for abused and abandoned wolves.
no
Organic Farming
Are yields from organic farming lower than those from conventional farming?
yes_statement
"yields" from "organic" "farming" are "lower" than those from "conventional" "farming".. "organic" "farming" produces "lower" "yields" compared to "conventional" "farming".
https://www.nature.com/articles/nature.2012.10519
Organic farming is rarely enough | Nature
Organic farming is rarely enough. Organic farming is sometimes touted as a way to feed the world's burgeoning population without destroying the environment. But the evidence for that has been hotly debated. Now, a comprehensive analysis of the existing science, published in Nature, suggests that farming without the use of chemical fertilizers and pesticides could supply needs in some circumstances. But yields are lower than in conventional farming, so producing the bulk of the globe’s diet will require agricultural techniques including the use of fertilizers, the study concludes. Photo caption: Strawberries are among the few crops that grow almost as well on organic farms as in conventional agriculture. Credit: maxim.photoshelter.com/Alamy. “I think organic farming does have a role to play because under some conditions it does perform pretty well,” says Verena Seufert, an Earth system scientist at McGill University in Montreal, Canada, and the study’s lead author. But “overall, organic yields are significantly lower than conventional yields”, she says. Area under inspection: Seufert's meta-analysis reviewed 66 studies comparing the yields of 34 different crop species in organic and conventional farming systems. The researchers included only studies that assessed the total land area used, allowing them to compare crop yields per unit area. Many previous studies that have shown large yields for organic farming ignore the size of the area planted — which is often bigger than in conventional farming. Crop yields from organic farming are as much as 34% lower than those from comparable conventional farming practices, the analysis finds.
Organic agriculture performs particularly poorly for vegetables and some cereal crops such as wheat, which make up the lion’s share of the food consumed around the world. Cereals and vegetables need lots of nitrogen to grow, suggesting that the yield differences are in large part attributable to nitrogen deficiencies in organic systems, says Seufert. In conventional agricultural systems, farmers apply chemical fertilizers to fields while the crops are growing, delivering key nutrients such as nitrogen when the crops need it most. Organic approaches, such as laying crop residue on the soil surface, build up nutrients over a longer period of time. “There is not the synchrony between supply of nutrients and crop demand,” says Andrew MacDonald, a soil scientist at Rothamsted Research, an agricultural-science institute in Harpenden, UK. Fruitful farming: Organic approaches fare better when producing fruits such as strawberries — which have yields only 3% lower than in conventional farming — and oilseed crops such as soybean, which have 11% lower yields. Organic farmers can boost yields of less-productive crops through land-management practices, such as planting them in rotation with leguminous crops that fix nitrogen into the soil, says Seufert. “There is still a big yield difference but the study does suggest organic systems have the potential to produce comparable yields, but in a very limited number of crops,” says Sonja Vermeulen, director of research for the Copenhagen-based climate change and agriculture programme led by the Consultative Group on International Agricultural Research. The present study considered only yield differences; Seufert's next project is to analyse existing research on the environmental impacts of organic and conventional agriculture. She is also planning original field research to assess how the two systems compare in developing countries, where reliable data are lacking.
yes
Organic Farming
Are yields from organic farming lower than those from conventional farming?
yes_statement
"yields" from "organic" "farming" are "lower" than those from "conventional" "farming".. "organic" "farming" produces "lower" "yields" compared to "conventional" "farming".
https://news.berkeley.edu/2014/12/09/organic-conventional-farming-yield-gap
Can organic crops compete with industrial agriculture? | Berkeley
Can organic crops compete with industrial agriculture? A systematic overview of more than 100 studies comparing organic and conventional farming finds that the crop yields of organic agriculture are higher than previously thought. The study, conducted by UC Berkeley researchers, also found that certain practices could further shrink the productivity gap between organic crops and conventional farming. Photo caption: The yields of organic farms, particularly those growing multiple crops, compare well to those of chemically intensive agriculture, according to a new UC Berkeley analysis. (Photo by Kristin Stringfield) The study, to be published online Wednesday, Dec. 10, in the Proceedings of the Royal Society B, tackles the lingering perception that organic farming, while offering an environmentally sustainable alternative to chemically intensive agriculture, cannot produce enough food to satisfy the world’s appetite. “In terms of comparing productivity among the two techniques, this paper sets the record straight on the comparison between organic and conventional agriculture,” said the study’s senior author, Claire Kremen, professor of environmental science, policy and management and co-director of the Berkeley Food Institute. “With global food needs predicted to greatly increase in the next 50 years, it’s critical to look more closely at organic farming, because aside from the environmental impacts of industrial agriculture, the ability of synthetic fertilizers to increase crop yields has been declining.” The researchers conducted a meta-analysis of 115 studies — a dataset three times greater than previously published work — comparing organic and conventional agriculture.
They found that organic yields are about 19.2 percent lower than conventional ones, a smaller difference than in previous estimates. The researchers pointed out that the available studies comparing farming methods were often biased in favor of conventional agriculture, so this estimate of the yield gap is likely overestimated. They also found that taking into account methods that optimize the productivity of organic agriculture could minimize the yield gap. They specifically highlighted two agricultural practices, multi-cropping (growing several crops together on the same field) and crop rotation, that would substantially reduce the organic-to-conventional yield gap to 9 percent and 8 percent, respectively. The yields also depended upon the type of crop grown, the researchers found. There were no significant differences between organic and conventional yield gaps for leguminous crops, such as beans, peas and lentils, for instance. “Our study suggests that through appropriate investment in agroecological research to improve organic management and in breeding cultivars for organic farming systems, the yield gap could be reduced or even eliminated for some crops or regions,” said the study’s lead author, Lauren Ponisio, a graduate student in environmental science, policy and management. “This is especially true if we mimic nature by creating ecologically diverse farms that harness important ecological interactions like the nitrogen-fixing benefits of intercropping or cover-cropping with legumes.” The researchers suggest that organic farming can be a very competitive alternative to industrial agriculture when it comes to food production. “It’s important to remember that our current agricultural system produces far more food than is needed to provide for everyone on the planet,” said Kremen. “Eradicating world hunger requires increasing the access to food, not simply the production.
Also, increasing the proportion of agriculture that uses sustainable, organic methods of farming is not a choice, it’s a necessity. We simply can’t continue to produce food far into the future without taking care of our soils, water and biodiversity.” A National Science Foundation Graduate Research Fellowship and a Natural Sciences and Engineering Research Postdoctoral Fellowship helped support this research.
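Meta-analyses like the one described above typically compare paired yields through a log response ratio, ln(organic/conventional), and back-transform the pooled mean into a percent gap. The sketch below is illustrative only, not the study's code; the function names are mine.

```python
import math

def log_response_ratio(organic_yield: float, conventional_yield: float) -> float:
    """Effect size commonly used in yield meta-analyses: ln(organic/conventional)."""
    return math.log(organic_yield / conventional_yield)

def gap_percent(mean_lrr: float) -> float:
    """Back-transform a mean log response ratio into a percent yield gap."""
    return (1.0 - math.exp(mean_lrr)) * 100.0

# A single paired comparison: 4.0 t/ha organic vs 5.0 t/ha conventional
# gives an LRR of ln(0.8), i.e. a 20% gap for that pair.
lrr = log_response_ratio(4.0, 5.0)

# A pooled mean LRR of ln(0.808) corresponds to the ~19.2% gap reported above.
print(round(gap_percent(math.log(0.808)), 1))
```

Working on the log scale keeps the ratio symmetric and lets gaps from many studies be averaged before converting back to a percentage.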
yes
Organic Farming
Are yields from organic farming lower than those from conventional farming?
yes_statement
"yields" from "organic" "farming" are "lower" than those from "conventional" "farming".. "organic" "farming" produces "lower" "yields" compared to "conventional" "farming".
https://www.cnn.com/2012/04/26/world/organic-food-yield/index.html
Study: Organic yields 25% lower than conventional farming | CNN
"We should not be looking for the silver bullet solution, but rather combining different approaches," says co-author CNN — Organic farming is widely perceived to be a healthy, more environmentally-friendly alternative to conventional agricultural techniques. But its role in providing for an increasingly crowded planet remains unclear with its merits hotly contested. New research looks set to refuel the debate revealing yields from organic farming to be, on average, 25% lower than conventionally-farmed produce. Reporting in the science journal Nature, researchers from Canada’s McGill University and the U.S.’s University of Minnesota say that the differences are not uniform across every crop with some performing better than others. The comprehensive analysis of current scientific literature compared 316 organic and conventional crops across 34 species from 62 study sites. We need to have a more nuanced debate about organic versus conventional agriculture Verena Seufert, McGill University Legumes (e.g. soybeans) were 11% lower while fruits were almost comparable with conventional farming with yields just 3% lower. “I think what we were able to do is identify situations where organic agriculture performs well and situations where (it) is not so good,” said co-author Verena Seufert from McGill University. “What we should do is try to address the issues and build systems that achieve high organic yields,”she added. Researchers say higher quantities of nitrogen in the soil enable organic crops to perform better while pH-neutral soils can also provide a better growing environment. Adhering to the best organic management practices can also help, closing the average yield gap with conventional farming to just 13%, according to researchers. 
Achieving sustainable food security will require many different farming techniques including organic, conventional and possibly “hybrid” systems, researchers say, enabling food production at affordable prices for both farmers and consumers, while limiting the impact on the environment. “We need to have a more nuanced debate about organic versus conventional agriculture. Instead of saying it’s an either/or, or it’s black and white, we need to take the best of both approaches and identify the situations that work and those that don’t,” Seufert said. As the study points out, numerous comparative studies of organic and conventional yields have already been conducted, with conflicting results. But, as Seufert and colleagues point out, those findings were queried for use of data from “crops not truly under organic management and inappropriate yield comparisons.” The new study has attempted to address some of the criticisms by limiting analysis to “truly” organic systems. Food is an emotional topic, says Seufert, and much more than about consuming nutrients. “There are a lot of social and cultural values that we associate with food. So many of these food debates – like meat or vegetarian diets, about local or global food systems – all of these debates are often quite heated,” she said. “What we need to do is to try and understand the arguments on both sides and assess the different options as objectively as possible, by supporting them with empirical evidence. “Maybe we should not be looking for the silver bullet solution, but rather combining different approaches and taking the best from different suggestions.” But Megan Kintzer, director of development at the Rodale Institute, an organic farm and research center in Pennsylvania, says that organic farming is a more sustainable system.
“There is less energy use from organic farming, and the conventional systems produce 40% more greenhouse gases,” Kintzer said.
"We should not be looking for the silver bullet solution, but rather combining different approaches," says co-author CNN — Organic farming is widely perceived to be a healthy, more environmentally-friendly alternative to conventional agricultural techniques. But its role in providing for an increasingly crowded planet remains unclear with its merits hotly contested. New research looks set to refuel the debate revealing yields from organic farming to be, on average, 25% lower than conventionally-farmed produce. Reporting in the science journal Nature, researchers from Canada’s McGill University and the U.S.’s University of Minnesota say that the differences are not uniform across every crop with some performing better than others. The comprehensive analysis of current scientific literature compared 316 organic and conventional crops across 34 species from 62 study sites. We need to have a more nuanced debate about organic versus conventional agriculture Verena Seufert, McGill University Legumes (e.g. soybeans) were 11% lower while fruits were almost comparable with conventional farming with yields just 3% lower. “I think what we were able to do is identify situations where organic agriculture performs well and situations where (it) is not so good,” said co-author Verena Seufert from McGill University. “What we should do is try to address the issues and build systems that achieve high organic yields,”she added. Researchers say higher quantities of nitrogen in the soil enable organic crops to perform better while pH-neutral soils can also provide a better growing environment. Adhering to the best organic management practices can also help, closing the average yield gap with conventional farming to just 13%, according to researchers. 
Achieving sustainable food security will require many different farming techniques including organic, conventional and possibly “hybrid” systems, researchers say, enabling food production at affordable prices for both farmers and consumers, while limiting the impact on the environment. “We need to have a more nuanced debate about organic versus conventional agriculture.
yes
Organic Farming
Are yields from organic farming lower than those from conventional farming?
yes_statement
"yields" from "organic" "farming" are "lower" than those from "conventional" "farming".. "organic" "farming" produces "lower" "yields" compared to "conventional" "farming".
https://pubmed.ncbi.nlm.nih.gov/22535250/
Comparing the yields of organic and conventional agriculture
Abstract Numerous reports have emphasized the need for major changes in the global food system: agriculture must meet the twin challenge of feeding a growing population, with rising demand for meat and high-calorie diets, while simultaneously minimizing its global environmental impacts. Organic farming—a system aimed at producing food with minimal harm to ecosystems, animals or humans—is often proposed as a solution. However, critics argue that organic agriculture may have lower yields and would therefore need more land to produce the same amount of food as conventional farms, resulting in more widespread deforestation and biodiversity loss, and thus undermining the environmental benefits of organic practices. Here we use a comprehensive meta-analysis to examine the relative yield performance of organic and conventional farming systems globally. Our analysis of available data shows that, overall, organic yields are typically lower than conventional yields. But these yield differences are highly contextual, depending on system and site characteristics, and range from 5% lower organic yields (rain-fed legumes and perennials on weak-acidic to weak-alkaline soils), 13% lower yields (when best organic practices are used), to 34% lower yields (when the conventional and organic systems are most comparable). Under certain conditions—that is, with good management practices, particular crop types and growing conditions—organic systems can thus nearly match conventional yields, whereas under others it at present cannot. To establish organic agriculture as an important tool in sustainable food production, the factors limiting organic yields need to be more fully understood, alongside assessments of the many social, environmental and economic benefits of organic farming systems.
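The land-use concern the abstract attributes to critics reduces to simple arithmetic: if organic yields are X percent lower, matching conventional output takes 1/(1 - X/100) times the area. A quick illustration of that relationship (my own sketch, not from the paper):

```python
def extra_land_factor(yield_gap_pct: float) -> float:
    """Relative land area needed to match conventional output when
    organic yields are `yield_gap_pct` percent lower."""
    return 1.0 / (1.0 - yield_gap_pct / 100.0)

# The abstract's contextual gaps of 5%, 13% and 34% imply roughly
# 1.05x, 1.15x and 1.52x the land, respectively:
for gap in (5, 13, 34):
    print(f"{gap}% lower yields -> {extra_land_factor(gap):.2f}x the land")
```

Because the relationship is a reciprocal, the land penalty grows faster than the gap itself, which is why the 34% scenario draws the most criticism.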
yes
Organic Farming
Are yields from organic farming lower than those from conventional farming?
yes_statement
"yields" from "organic" "farming" are "lower" than those from "conventional" "farming".. "organic" "farming" produces "lower" "yields" compared to "conventional" "farming".
https://news.climate.columbia.edu/2019/10/22/organic-food-better-environment/
Is Organic Food Really Better for the Environment? - Sustainable ...
Is Organic Food Really Better for the Environment? When you walk into any farmers’ market, you’re greeted with signs that say “Certified Organic” in bold letters. Despite organic food being far more expensive than its non-organic counterparts, organic agriculture has become the most popular type of alternative farming, not only in the United States but also globally. According to the United States Department of Agriculture (USDA), as of 2012, organic farming accounted for 3 percent of the total sales within the country’s food industry. Even in European countries like Finland, Austria, and Germany, governments have been busy implementing plans and policies that aim to dedicate 20 percent of land area to organic farming. In South Asia, Bhutan has ambitious plans of going 100 percent organic by 2020. Meanwhile, Sikkim, a state in north-eastern India, managed to go 100 percent organic in 2016. The gradual shift towards organic farming has been driven mainly by consumers’ increasing concern about the health impacts of accidentally consuming pesticides and chemical fertilizers. During the 1990s, the USDA first standardized the meaning of the term “organic” — basically, farmers do not use any form of synthetic fertilizers, pesticides, herbicides, or fungicides to grow their produce. Organic farming is widely considered to be a far more sustainable alternative when it comes to food production. The lack of pesticides and wider variety of plants enhances biodiversity and results in better soil quality and reduced pollution from fertilizer or pesticide run-off. Conventional farming has been heavily criticized for causing biodiversity loss, soil erosion, and increased water pollution due to the rampant usage of synthetic fertilizers and pesticides. However, despite these glaring cons, scientists are concerned that organic farming produces far lower yields than conventional farming, and so requires more land to meet demand.
A polarized debate Not surprisingly, the debate over organic versus conventional farming is heavily polarized in academic circles. Of late, the conversation about organic farming has shifted from its lack of chemicals to its impact on greenhouse gas emissions. In December 2018, researchers from Chalmers University of Technology published a study in the journal Nature that found that organic peas farmed in Sweden have a bigger climate impact (50 percent higher emissions) as compared to peas that were grown conventionally in the country. “Organic farming has many advantages but it doesn’t solve all the environmental problems associated with producing food. There is a huge downside because of the extra land that is being used to grow organic crops,” said Stefan Wirsenius, an associate professor at Chalmers. “If we use more land for food, we have less land for carbon sequestration. The total greenhouse gas impact from organic farming is higher than conventional farming.” Soon after the paper was published and widely covered by various news organizations globally, several researchers criticized the study. Andrew Smith, a chief scientist at the Rodale Institute, lashed out in a post saying that it was “irresponsible to extrapolate a global phenomenon based on two crops grown in one country over three years.” Smith also added that more data should be included and analyzed before making conclusions. Commenting on this, Wirsenius said, “It is true that we had a small comparison between organic versus conventional farming based on Swedish statistics. This is because Sweden is one of the very few countries that has statistics that include the yields from organic and conventional crops.” “It would have been better with bigger sample size and that is a valid concern,” he added. It is estimated that by 2050, the demand for food is going to increase by 59 to 98 percent due to the ever-increasing global population. 
A major challenge for the agriculture business is not only trying to figure out how to feed a growing population, but also doing so while adapting to climate change and coming up with adequate mitigation measures. Some scientists continue to be concerned that with limited land areas that will be available for farming, it might not be sustainable for industrialized countries to go 100 percent organic. A recent study published in the journal Nature Communications concludes that the widespread adoption of organic farming practices in England and Wales would lead to increases in greenhouse gas emissions. This is mainly because agricultural yields would be 40 percent lower. The researchers argued that with fewer crops being grown locally, these two countries would have to import more food supplies. However, if England and Wales did not solely rely on organic farming, and both countries’ farmers used this alternative form of farming on a smaller scale, it could result in a 20 percent reduction in carbon emissions. “For organic farming to be successful, agribusinesses would have to find the balance between the costs involved and also, its carbon footprint, while taking into consideration the overall need to meet the high demands for food,” said Alexander Ruane, a research physical scientist at NASA Goddard Institute for Space Studies and an adjunct associate research scientist at the Columbia University Center for Climate Systems Research. “That’s tough because the goal of organic farming in developed countries currently is about meeting the needs of those who can afford the luxury to buy the highest quality food. If the needs of this luxury interfere with the need to feed the entire population, then you have the potential for conflicts.” The blurry line between “good” and “bad” Making matters more complicated, some experts worry that the term “organic food” is not always properly regulated. 
As more large corporations get involved in organic markets, researchers claim that this shift to the mainstream has “led to the weakening of ecologically beneficial standards”. It may also limit organic farming’s ability to reduce greenhouse gas emissions. While researchers and the general public remain divided on whether organic farming is more sustainable than conventional farming, Sonali McDermid, an assistant professor at the department of environmental studies at New York University, says that it is very hard to generalize across any farming systems or label conventional or organic farming as “good” or “bad”. “They have very different manifestations, depending upon where you go,” she said. “An apt example would be the case of a farm involved in the production of organic berries in Central Valley, California. While they are not using additional land area or chemical inputs like in conventional farming, they are using other really strong inputs like sulfur,” explained McDermid. “This can be harmful to farmworkers as they need to wear proper suits and protective gear even though it is not chemically synthetic. Despite that, it is just as powerful in some cases.” McDermid is also concerned that some agribusinesses can farm uniformly without any biodiversity and still call themselves organic. Whereas in developing or emerging economies — for example in India — farmers tend to follow a far more traditional definition of organic farming. “In India, organic farms grow lots of different crops at the same time. They grow plants that can naturally keep pests away and don’t use powerful inputs like sulfur. Instead, the farmers use plants and biodiversity to help regulate their cropping systems,” said McDermid. Indian farmers who grow organic crops also make their fertilizers by filling a field with legumes that they grow in rotations. Once the legumes have fully grown, the farmers manually plow them into the ground. 
That results in larger quantities of nitrogen being pumped into the soil, as opposed to only using manure or even worse, synthetic fertilizers. McDermid said that in some areas of the developing world, organic farming can actually boost yields over conventional farming because it doesn’t rely on so much water and chemical inputs. These practices also build soil fertility and lead to less pollution. Experts maintain that in the heated debate over organic versus conventional farming, there needs to be more information available for consumers when it comes to labeling and even understanding the certification processes in industrialized countries like the U.S. “A huge fraction, if not the majority of organic goods sold at supermarkets in the U.S. is probably industrial,” added McDermid. For now, in the developed world, the industrialization or commercialization of organic farming has resulted in a lot of difficulty for both consumers and researchers, who are trying to understand what the goals of this booming industry are. To eat organic or not to eat organic In the U.S., even sustainability experts continue to be unsure of whether food items like fruits and vegetables with the “certified organic” labels are in fact, genuinely organic or not. McDermid said that even she sometimes feels uncertain about what to buy in the supermarket. That being said, both Wirsenius and McDermid agree that it is far more environmentally sustainable to eat organic chicken instead of beef that was produced conventionally. Yet, consuming large portions of organically produced meat will still have a bigger environmental impact than eating conventionally produced crops and fruits. Taking into consideration the high costs involved in going 100 percent organic, especially when it comes to buying fruits and vegetables, McDermid said if you can afford to spend extra, she would recommend buying them. It might also help to look for organic food that was grown locally. 
For instance, several community gardens grow organic vegetables that are sold in nearby farmers’ markets. Keeping that in mind, there’s no need to feel guilty or under pressure to spend extra for organic produce. “I would never put that kind of pressure on anybody. It’s really unfortunate we’re in a situation where agribusinesses focus only on yields, which makes an alternative form of farming comparatively much more expensive,” sighed McDermid. While the organic versus conventional farming debate rages on, there is one clear way to lower the environmental impact of your food, and it won’t hurt your wallet: reducing the amount of meat in your diet. Correct me if i’m wrong, but are you saying it’s good because you buy it? if i’m right than are one of the side effects bad spelling? you spelled the following words wrong, “gud”, “your”, “and”. the rest is just not english. Dear Maria, Please consider both the Price and the Cost. The price is the few cents (the delta between commercial and organic produce) but the Cost is the enduring and cumulative dangerous effects on much of the life on this planet. Which includes our children and grand children. Marty The best thing for our children and the environment is to have a lot less children. I’m one of the believers that organic farming, at least as it’s done in the west, will mean that more land will need to be converted to farm land just to feed millions of new mouths being born. Yes conventional farming can be bad for the environment but organic isn’t a long term solution, only less people is. Kristin, though the population is growing and we need to consider that, we currently produce enough food globally to feed everyone despite not everyone having access to food. The solution is not about population control so much as it is about food access. Conventionally there were no artifical poisons added to the soil. It should read traditional or chemical farming, that way there is no confusion when people choose thier food. 
They can choose chemicaly altered , genetically modified or artifically manured, or traditional and unaltered. This is an old discussion. We must try to look @ things in a circular way. Which is the final outcome to buy regular food (so much cheaper), buying an hamburger @ McDonald (so tasty)… A. If you are a young person, you are storing in your body-fat all the toxins that you are eating. They can increase the incidence of hormonal changes (with for ex. growth anomalies ) attention span problems, intestinal lining irritation, and a series of other correlated problems that make youth more susceptible to less healthy lives. B. If you are a woman in reproductive age you are at risk of breastfeeding your baby a milk that’s less healthy, because for breastfeeding your body prepare itself to the task using all the fat that you have stored. So more toxins you have in your fat cells, more toxines you trasmit to your growing embrio/baby. At the end: how much will it cost you to be a less healthy person/a-person-at -risk of many common patologies? And what is preferable in the long run? Just food for thought, and maybe we can start an honest and respectful discussion about it! Very true. It’s like the problems with high processed foods / non-organic, much like heavy metals in water (lead, mercury, etc) – the high processed foods mixed in with lots of synthetic chemicals gives not only unhealthy lives / costly “health care” (health care that is often more part of profiteering system as more part of problem than true “health”) yet increased costs via increased mental problems / unhealthy brain etc. functioning that correlates strongly for societal maladies like increased incidence of violence and whatnot diseases, nevermind the huge and increasing ecological costs such as destabilized climate related ecology. What leads you to that conclusion? IARC currently holds a minority viewpoint that glyphosate is a probable carcinogen. 
Both the US EPA and the European Environmental Agency came to the conclusion that it isn’t. The epidemiological data don’t indicate a relationship between glyphosate and cancer. Of course, it’s possible we just haven’t had sufficiently large study populations, since the cancers of interest are exceedingly rare, and research groups like mine are continuing to investigate the connection, but so far, a connection hasn’t emerged. Unfortunately, lawsuits like the one in the SF Bay Area that sided with the plaintiff skew public opinion, despite the lack of scientific evidence. I have no opinion on Monsanto or whoever else is producing glyphosate these days, it just irks me a little when people make claims that aren’t backed by data. Anyway, if there’s better literature that I’ve overlooked, please share! 🙂 Plus there are wide variety of ailments besides cancers that have largely increased (increased while we generally have been throwing trillions at non-prevention of ailments caused by “abundance of synthetics” / unhealthy practices). Not to diminish the costs and problems of cancerous results. Interesting points made in the article and challenges to organic farming methods. I think its not just important but, key and critical for us (specifically representing both science and higher education) to resist narrowing our view of food production on this planet to a comparatively fine slice view as this example held in a vacuum. I doubt any of the contributors to this effort would challenge the key role insect pollinators play in survival of most plant eating species on the planet, or the facts associated with the increasing threat on honey bee populations from non organic food production methods, though this was not mentioned at all in the article. Once the pollinators fail both methods; toxic chemical-based and organic food production will fail. 
That alone should be reason enough for solid science to adopt both scrutinizing examination and a healthy skepticism for methods chemically divergent from those found in our natural bio-system. When unbiased science and a logical comprehensive overview is applied in the absence of the pressure from greed-driven large corporations we’ll move toward solid life-supporting solutions Reply Jen Freudenberg 3 years ago I don’t eat organic to help the planet. I eat organic to stay healthy. Hi Jen, I think it’s equally as important of a health consideration to think of the environmental impacts; soil erosion, water pollution, greenhouse gases, are all going to impact our individual health. So it’s important to discuss the best way to farm for sustainable, AND healthy lives. Thanks for breaking down this debate on both sides of the coin. When we talk about the increase of land needed for organic farming and the challenge of keeping up with growing demand for food production, I’d like to bring up the enormous issue of food waste. The amount of food wasted by U.S. farmers, retailers, and households each year is enough to solve the global hunger problem. I recognize this is a giant, systemic issue to solve, but we can start at the consumer level by being careful to use everything we buy. Consumer practices and expectations around seasonality of produce should also be considered here. The agricultural system has developed in a way that caters to our demand to have most fruits and vegetables year-round (I have certainly been guilty of this). This leads to unnecessary mass production of crops and GHG emissions that could be avoided. Of course farmers markets help toward this cause, as they promote eating both local and seasonal produce. But farmers markets are only accessible to a sliver of society that can afford them. I’m interested to see how federal incentive programs for healthy food such as Market Match continue to develop. 
All to say that going organic is just one piece of this complex puzzle, and we shouldn’t lose sight of the fact that there are food deserts all over the U.S., where people can’t even access/afford fresh produce, and certainly don’t have the time or budget to be thinking about whether or not it’s organic. Great blog!….I’ve responded to it with: “Is organic food really better for the environment?” That was the title of a recent blog post recent blog by Anuradha Varanasi writing for Columbia University’s Earth Institute. This is a great question, and it refers to a study reported on in Nature in 2018 (See original article). Hi Sherry, I hear you, sometimes it feels like the “100% organic natural and pure” ideology is the only way out, but we should be diligent about how we are manifesting that ideology in the world and ask the question: “is it working?” because if it’s not then indeed we will be blindly arguing on our way out. (Especially when the arguing continues to be heavily subsidized towards the unhealthy legacy that paying “experts” to argue has largely given US / and the world in general). While ecological systems are complex it’s not rocket science, it’s pretty obvious what we as people are doing to harm ourselves and ecology in general Yes!! We have to love our planet. Our planet provides everything we need to survive. It’s long overdue that we give back to her, if she dies, we surely will. Reply Charles Rattenberg 3 years ago It’s amazing that the mountain of data that exists doesn’t influence more people on this subject. Organic production got its beginnings with an aim to protect the environment. What followed was a belief that the foods produced were better and/or healthier but there was little or no evidence to support this. There are a number of myths regarding organics, a few of which are: NO PESTICIDES ARE USED. This is false. Period. The NOP lists a chemical inventory of pesticides that are allowed in organic farming. 
It's ironic that some of the most commonly used insecticides in organic fruit and vegetable production are copper sulfates. These are less toxic than conventional insecticides, so farmers use more to control pests. To make matters worse, they don't degrade in the soil, which over the long term is terrible for the soil and the environment in general.

NATURAL CROP AMENDMENTS ARE BETTER THAN SYNTHETIC CHEMICALS. Again, not true. Arsenic, cyanide, and death are all natural, but no one would argue that they are desirable. Nitrogen is the largest soil input for most crops, and under organic principles the source pretty much has to come from the south end of a northbound animal. News flash: plants can't tell (or don't care) if the nitrogen they need comes from poop or ammonium nitrate. The water table does, however. Compost applications regularly leach into the water table because chemical release cannot be practically controlled like it can with synthetic applications. Then there's the yuck factor. You get the idea.

ORGANIC FARMING IS MORE SUSTAINABLE THAN CONVENTIONAL FARMING. Another myth. Consider that crop yields of organic crops are about 35% lower than conventional crops (USDA data), so it follows that for every bushel of organic food produced there is a 35% larger footprint; 35% more water is used; and more fertilizer, pesticides (yes, even for organic), and labor are used. The water table contamination issue mentioned above really comes into play here. There's more, too. When pest pressures spike, many organic crops do not survive without conventional solutions.

There is more, much more, but this is not about bashing organic production. To the contrary, the intent of organic production is good, but the religious fervor behind it does not reflect reality. Consider some people's idyllic goal of having the whole world go organic. Where will the manure come from?
The amount of water needed for the additional animals alone makes this a moot point, not to mention the increased carbon release. We should support organic food production and healthy lifestyles, but it would be wise to move away from the cult mentality that has permeated the sector. Maybe then we can progress in a manner that can actually make a difference and become truly sustainable.

"Consider this": people in general throw out 25 to 40% of the food they buy or are given. This is in part because the body does not appreciate unhealthy, often nutritionally deficient, overprocessed foods. People are becoming more overweight/obese and increasingly susceptible to a huge variety of illnesses related to food (and chemical) ingestion, including the nutritional deficiencies that often occur with "conventional" practices. The organic movement has made some feel less need for chemical-laden practices, over-tilling, and processed-to-death foods, because most lower-processed foods are healthier, and because dead soil eventually leads to an earlier decline in people's health.

Many changes observed in the environment are long term, occurring slowly over time. Organic agriculture considers the medium- and long-term effects of agricultural interventions on the agro-ecosystem. It aims to produce food while establishing an ecological balance to prevent soil fertility or pest problems. Organic agriculture takes a proactive approach, as opposed to treating problems after they emerge.

Reply jyot 2 years ago
I agree: organic food is often a fight between good and bad, but I guess when the good outnumbers the bad, you go with the good, right? I have been buying organic food from a place called The Organic World (https://theorganicworld.com/) and I have to say it tastes better, and I have seen an improvement in health too.
Firstly, a bunch of thanks for sharing such valuable information & research with us. I was doing some research on the USDA report on the organic agricultural industry & consumption, landed on your article, and found it very informative.

I thought the primary point of organic growing was saving the soil. We can see in the Middle East, the former Fertile Crescent, what happens when the soil is not cared for: the end of the civilization.

This article asks the wrong question. The question isn't "organic v conventional" but rather how do we grow nutrient-dense food? I have well over a decade of direct involvement in organic agriculture plus over 50 years of growing my own food. I founded and ran an organic fertilizer company, but that was back before I had the epiphany as it relates to the previous "wrong" question. At the end of the day, plants don't care where their nutrients come from or whether the nitrogen comes out of a bag or from manure. Frankly, manure from conventional meat production is just a means to recapture some of the nutrients used to grow the feed the animals ate. What matters more than the source of the major nutrients (N, P, K) is all the other secondary, trace, and micro nutrients plants need to produce nutrient-dense, healthy produce and grazing land. Nutrient-dense grasslands produce healthy animals that are naturally free from disease. And the best place to store carbon is in the soil we are grazing our animals on. Why don't we adopt what some other countries are doing, by grazing animals for several years on land and then converting it back to tillage land? You break the parasite cycle so animals are healthier, and while they are grazing, the grass is sequestering carbon in the soil. We don't live in a binary world, so why do we think that all questions are either/or?

Reply abdelgadir 2 years ago
If you want to have your own organic farm in Sudan, I can help you with that.
Reply Grace 2 years ago
Thank you for this piece of information. As a small-scale farmer at a rural home in Kenya, doing organic farming is really a challenge.

Reply Craig 2 years ago
This article is lacking depth, with too much focus on the shallowness often seen from "experts," a problem seen in quite a few universities. It doesn't deeply consider the many benefits of organic potentials, nor the many long-run costs that biocides and "conventional" farming practices carry. Yet the passage about McDermid's input touches on the many obvious and hidden costs of biocides and "conventional" practices. Getting rid of large farm subsidies' public funding of synthetic chemicals (and of practices that have diminished land biodiversity), as well as boosting low-income people's spending on low-processed and regenerative organic foods, would help heal the extremely unfair market disadvantage at which genuine organic efforts have been put by governments, corporate food processors, and CAFO operators that have long pushed unhealthy practices and products with subsidies of various unhealthy sorts.

Reply Craig 2 years ago
Weakening of the standards and weak certifiers of organics is a major concern and problem that (true) organic efforts continue to face. This, alongside the legacy of nearly 100 years of highly subsidized food production, forces decent, legitimate efforts to compete with the toxic efforts of (still too) many. It's incredible how much the people behind truly organic efforts have overcome when faced with the negatives of both corporate agra and many governments' predecessors. On my second read this "weakening of standards" stood out; very good article.

Reply Ecromancer 2 years ago
One of the things to consider is GMO, since some GMO crops use a protein that is harmless to humans but deadly to bugs. They can even take a ton of carbon from the air to reverse climate change. Another thing is they can be bigger and more nutritious.
So if anything, GMO is better for you and the environment. So if you see something that says GMO, buy it, because it is most likely better for every living thing!

Reply Ecromancer 2 years ago
If you want to msg me you can reply. I would like to see reasons for non-GMO. I want to see a good reason, although most people who are anti-GMO do not make good arguments. So please do.

Reply George Davis 2 years ago
Whoever wrote this article is completely clueless as to how organic farming actually works. Certified organic doesn't mean pesticide-free; organic crops get sprayed with non-synthetic pesticides approximately 5X as often as conventional. Organic crops get fertilized predominantly with animal waste and slaughterhouse byproducts. The runoff into the waterways is harmful to the environment. Organic crops have to be cultivated regularly, causing erosion problems.

Reply 22gz 1 year ago
I am very worried about the carbon footprint being placed on this world. Chemical farming is not sustainable for the environment, and this form of sustainable farming is refreshing for the environment.

Reply person 6 months ago
Actually, I like conventional food because it's cheaper, and studies have shown that organic and conventional farming carry similar risks of hurting the environment.

State of the Planet is a forum for discussion on varying viewpoints. The opinions expressed by the authors and those providing comments are theirs alone, and do not necessarily reflect the opinions of the Earth Institute or Columbia University.
During the 1990s, the USDA first standardized the meaning of the term “organic” — basically, farmers do not use any form of synthetic fertilizers, pesticides, herbicides, or fungicides to grow their produce. Organic farming is widely considered to be a far more sustainable alternative when it comes to food production. The lack of pesticides and wider variety of plants enhances biodiversity and results in better soil quality and reduced pollution from fertilizer or pesticide run-off. Conventional farming has been heavily criticized for causing biodiversity loss, soil erosion, and increased water pollution due to the rampant usage of synthetic fertilizers and pesticides. However, despite these glaring cons, scientists are concerned that organic farming has far lower yields as compared to conventional farming, and so requires more land to meet demand.

A polarized debate

Not surprisingly, the debate over organic versus conventional farming is heavily polarized in academic circles. Of late, the conversation about organic farming has shifted from its lack of chemicals to its impact on greenhouse gas emissions. In December 2018, researchers from Chalmers University of Technology published a study in the journal Nature that found that organic peas farmed in Sweden have a bigger climate impact (50 percent higher emissions) as compared to peas that were grown conventionally in the country. “Organic farming has many advantages but it doesn’t solve all the environmental problems associated with producing food. There is a huge downside because of the extra land that is being used to grow organic crops,” said Stefan Wirsenius, an associate professor at Chalmers. “If we use more land for food, we have less land for carbon sequestration. The total greenhouse gas impact from organic farming is higher than conventional farming.” Soon after the paper was published and widely covered by various news organizations globally, several researchers criticized the study.
Despite Profit Potential, Organic Field Crop Acreage Remains Low

Highlights:
- USDA survey data show that organic systems had lower yields and higher total economic costs than conventional systems.
- Organic corn and soybeans have been profitable, primarily due to the significant price premiums paid for certified organic crops, which more than offset the additional economic costs. Organic wheat has been less profitable.
- Despite potentially higher returns, adoption of the organic approach among U.S. field crop producers remains low, likely due to low crop yields and the challenges of effective weed control, among other factors.

U.S. crop acres under USDA certified organic systems have grown rapidly since the National Organic Program (NOP) was implemented in 2002. Organic crop acreage increased from about 1.3 million to almost 3.1 million acres between 2002 and 2011. While acreage for some major field crops increased substantially, growth was modest for others. Among three major field crops—corn, soybeans, and wheat—certified organic production of corn increased the most. Certified organic wheat acres were the highest, but declined after 2009. Recent USDA survey data show corn acreage up 24 percent, soybean acreage up 3 percent, but wheat acreage down 3 percent between 2011 and 2014. Despite the strong interest in organic food in the United States, overall adoption of organic corn, soybeans, and wheat remains low, standing at less than 1 percent of the total acreage of each crop.

One reason for the low levels of organic adoption among U.S. field crop producers may be a lack of information about the relative costs and returns of organic and conventional production systems on commercial farms, and the performance of farms choosing the organic approach. Researchers have studied organic crop production in a long-term experimental setting, but little has been reported about the commercial production of organic field crops.
To cast light on this issue, ERS researchers used actual farm data to estimate the difference in costs of production that can be attributed to producing certified organic crops, and used these estimates to calculate the price premiums that make organic systems profitable when compared with conventional systems (see the “Data and Production Costs” box).

Organic Yields Lower Than Conventional

Data from long-term cropping system experiments in Iowa, Pennsylvania, and other States suggest that organic crop production can bring significant returns. The data show similar conventional and organic yields and lower organic production costs. However, farm data from USDA producer surveys show organic crop yields to be much lower than those of conventional production. The yield differences estimated from USDA farm data are similar to those estimated by comparing USDA’s 2011 Certified Organic Production Survey with USDA’s 2011 Crop Production Report. These data show organic corn yields to be 41 bushels per acre less than conventional yields, organic wheat yields to be 9 bushels per acre less, and organic soybean yields to be 12 bushels per acre less. In USDA organic surveys, producers reported that achieving yields was one of the most difficult aspects of organic production. The yield differences revealed by survey data may be due to the unique problems encountered by organic systems outside of the experimental setting, such as effective weed control. Genetically modified conventional seed varieties commonly used for corn and soybean production may also be higher yielding than standard organic seed varieties. In some organic field crop systems, such as for wheat and soybeans, lower yields may be due to the high percentage of organic growers who use lower yielding food-grade varieties. Most of the organic wheat and soybean production is used to make food items. Most organic corn is fed to livestock.
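The yield gaps above are reported in absolute bushels per acre; to express them as percentages you need the conventional baselines, which the article does not give. A minimal sketch, where the conventional yields are my illustrative assumptions (roughly in line with 2011-era U.S. averages), not ERS figures:

```python
# Illustrative only: conventional baseline yields (bu/acre) are assumed;
# the per-acre gaps are the ERS figures quoted in the text.
assumed_conventional = {"corn": 147, "wheat": 44, "soybeans": 42}  # assumption
reported_gap = {"corn": 41, "wheat": 9, "soybeans": 12}            # from the article

for crop, conv in assumed_conventional.items():
    organic = conv - reported_gap[crop]
    pct = 100 * reported_gap[crop] / conv
    print(f"{crop}: organic ~{organic} bu/acre, about {pct:.0f}% below conventional")
```

Under these assumed baselines the gaps work out to roughly a fifth to a quarter of conventional yield, which is why the per-bushel cost comparisons later in the article matter so much.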
Organic Costs More Per Bushel

Similar to findings from experimental data, which primarily examine only operating costs, analysis of USDA survey data shows that mean operating costs, and operating plus capital costs, per acre for crop production were generally lower for organic than for conventional farms. For example, total operating costs and operating plus capital costs per acre for organic corn were about $80 and $50 per acre lower, respectively, than for conventional corn. The mean difference in total economic costs per acre was not significant, but the composition of costs varied substantially between conventional and organic corn. Conventional corn growers had significantly higher seed, fertilizer, and chemical costs than organic growers, but lower costs for fuel, repairs, capital, and labor, as organic systems substituted manure and field operations for fertilizers and chemicals. Organic producers had higher fuel and capital costs because they used more field operations, particularly for tillage. Labor costs for organic production were also significantly higher. Mean total economic costs per bushel were significantly higher among the organic crop farms, due largely to lower crop yields. Researchers used alternative statistical methods to measure the cost difference between organic and conventional corn, wheat, and soybean production from farm survey data as if they were in an experimental setting. Organic transition and certification cost estimates were then added to the measured cost differences.

Organic Corn and Soybean Production Had Higher Returns

Comparison of the additional costs associated with organic production with historic price premiums (the difference between organic and conventional crop prices) provides an indication of the returns associated with organic field crop production.
Organic corn prices ranged from about $5 to $10 per bushel higher than conventional corn prices during 2011-14, while the economic cost difference was $1.92 to $2.27 higher, indicating significant profit potential from organic corn. Likewise, organic soybean prices averaged about $10 to $15 per bushel higher than conventional soybeans during the same period, creating price premiums high enough to easily cover the additional economic costs of $6.62 to $7.81 per bushel of organic soybean production. The gap between average organic and conventional wheat prices depended on the type of wheat produced. Throughout 2011-14, price premiums for organic food wheat increased, reaching above $10 per bushel, much higher than the economic cost differential of $3.90 to $4.46 per bushel between organic and conventional wheat production. However, farm prices of organic feed wheat were only $1-$4 per bushel higher than those for conventional wheat, often below the additional economic costs of organic wheat production. The yield, price, and cost differences were used to estimate the per acre returns to organic versus conventional production for each crop. Average additional economic costs of $83 to $98 per acre for corn, $55 to $62 per acre for wheat, and $106 to $125 per acre for soybeans are incurred from organic production. These cost estimates are based on the farm survey yield and cost data. Estimates of the average difference in net returns per acre for organic versus conventional production were positive and highest for corn ($51 to $66 per acre), followed by soybeans ($22 to $41 per acre), but negative for wheat (-$9 to -$2 per acre).
Organic production costs are higher than conventional costs, but the higher prices received for organic crops more than offset the higher costs for organic corn and soybeans, although not for organic wheat:

Crop       Extra economic cost   Extra economic cost   Difference in returns above
           ($ per bushel)        ($ per acre)          economic costs ($ per acre)
Corn       1.92 to 2.27          83 to 98              51 to 66
Wheat      3.90 to 4.46          55 to 62              -9 to -2
Soybeans   6.62 to 7.81          106 to 125            22 to 41

Source: USDA, Economic Research Service calculations using Agricultural Resource Management Survey data; figures include production cost differences plus organic transition and certification costs. The range of costs and returns was generated from alternative statistical methods.

Despite Potentially Higher Returns, Organic Acreage Remains Low

The main reason that organic returns were higher than conventional returns was the price premiums paid for organic crops. Price premiums received for organic crops were generally above the estimated additional economic costs of organic production for most crops during 2011-14. Estimates of the difference in net returns per acre for organic versus conventional production showed positive economic profit for organic corn and soybeans relative to conventional crops, consistent with expanded organic acreage of those two crops in recent years. Estimates of an economic loss per acre for organic versus conventional wheat are consistent with the recent decline in organic wheat acreage. Despite these potentially higher returns from organic production, adoption of the organic approach among U.S. field crop producers remains low. One possible reason is the ease of producing for the conventional market. Seed and chemicals are readily available from local seed and chemical company dealers, and conventional products can be sold at the local elevator.
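The profitability logic in this section reduces to a single comparison: does the organic price premium exceed the additional economic cost per bushel? A short sketch of that comparison, where the cost ranges are the ERS figures quoted above but the premiums are my illustrative midpoints of the 2011-14 ranges, not exact ERS values:

```python
# Per-bushel margin = organic price premium minus additional economic cost.
# Cost ranges come from the ERS figures in the text; the premiums are
# illustrative midpoints of the 2011-14 ranges, not exact survey values.
crops = {
    # crop: (assumed premium $/bu, extra economic cost range $/bu)
    "corn":       (7.5,  (1.92, 2.27)),
    "soybeans":   (12.5, (6.62, 7.81)),
    "feed wheat": (2.5,  (3.90, 4.46)),
}

for crop, (premium, (lo, hi)) in crops.items():
    mid_cost = (lo + hi) / 2
    margin = premium - mid_cost
    verdict = "premium covers extra cost" if margin > 0 else "premium falls short"
    print(f"{crop}: margin ${margin:+.2f}/bu ({verdict})")
```

Run with these midpoints, corn and soybeans show a positive per-bushel margin while feed wheat shows a negative one, matching the article's finding that organic corn and soybeans were profitable but organic feed wheat often was not.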
Organic farmers, in contrast, have to secure organic seed; learn to manage soil fertility, weeds, and other pests through natural methods; and find their own markets to sell crops, which may require storage on the farm until pickup. Thus organic farming requires more on-farm management. The low level of U.S. organic crop adoption may also be due to variations in climatic and market conditions. Organic production is more attractive where crop pests are fewer, such as in northern States. Also, a market for the more expensive organic food or feed crops is required, such as the demand for organic feed ingredients from the expanding organic dairy industry in States of the upper Midwest and Northeast. These factors may have limited the area where organic systems are potentially profitable.

Data and Production Costs

Data used in this study come from USDA’s 2010, 2009, and 2006 Agricultural Resource Management Survey (ARMS), administered by the National Agricultural Statistics Service (NASS) and ERS. This study uses ARMS data that include information about the production practices and costs of U.S. commodity production—corn in 2010, wheat in 2009, and soybeans in 2006. Each survey targeted producers in States that included over 90 percent of U.S. planted acreage of the commodity in each year. Production costs are divided into operating costs, operating plus capital costs, and total economic costs. Operating costs include costs for seed; fertilizer; chemicals; custom operations; fuel, lubrication, and electricity; repairs; purchased irrigation water; hired labor; and operating interest. Capital costs include the annualized cost of maintaining the capital used in production, and costs for non-real estate property taxes and insurance.
Total economic costs are the sum of operating and capital costs, plus opportunity costs (what these resources could have earned in their best alternative use) for land and unpaid labor, and allocated costs for general farm overhead items.
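The sidebar's three cost tiers nest: operating costs sit inside operating-plus-capital costs, which sit inside total economic costs. A short sketch makes the hierarchy concrete; every dollar figure below is a made-up placeholder, not a survey value:

```python
# ERS cost hierarchy: operating < operating+capital < total economic costs.
# All dollar amounts are placeholders for illustration only.
operating = {"seed": 90, "fertilizer": 55, "chemicals": 30, "fuel": 40,
             "repairs": 25, "hired_labor": 30, "operating_interest": 5}
capital = {"capital_recovery": 60, "taxes_and_insurance": 10}
opportunity = {"land": 150, "unpaid_labor": 45}   # best-alternative-use costs
overhead = 20                                     # allocated general farm overhead

operating_cost = sum(operating.values())
operating_plus_capital = operating_cost + sum(capital.values())
total_economic = operating_plus_capital + sum(opportunity.values()) + overhead
print(operating_cost, operating_plus_capital, total_economic)
```

The nesting explains why a system can look cheap on an operating-cost basis yet expensive on a total-economic-cost basis, as the article reports for organic corn once land and unpaid labor are counted.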
How more organic farming could worsen global warming

For decades, the conventional wisdom surrounding organic farming has been that it produces crops that are healthier and better for the environment as a whole. In the U.S., where organic food sales totaled nearly $50 billion last year and made up 5.7 percent of total food sales, companies such as Annie’s and Organic Valley market their products as leaving a low carbon footprint. They remind consumers that their ingredients “matter…to the planet we all share,” or that their farming practices “remove excess carbon dioxide from the air.” The International Federation of Organic Agriculture Movements promises in its literature that organic farming can “help reduce greenhouse gas emissions within the agricultural sector of the European Union and beyond.”

But a new study out this week challenges this narrative, predicting that a wholesale shift to organic farming could increase net greenhouse gas emissions by as much as 21 percent. “We’re not saying that organic is wrong,” said Adrian Williams, an associate professor of environmental systems at Cranfield University in the U.K., but that consumers and environmental organizations would be wise to consider what these farming practices would look like on a much larger scale before making assumptions about the environmental impacts. Williams worked on the study published in Nature Communications on Tuesday. While it’s unlikely that any country will pursue a complete, 100 percent transition to organic farming anytime soon, the study falls in line with others that raise questions about the degree to which these practices can mitigate the effects of climate change — and how market forces limit their ability to do so.

What would a shift to 100 percent organic look like?

Much research has been done about the link between organic farming and greenhouse gas emissions in smaller, niche settings, from grassland farms in Southern Germany to suckler-beef producers in Ireland.
Results have been varied — while organic farming practices lowered greenhouse gases in some scenarios, in others, emissions grew or remained constant. A team at Cranfield University sought to expand this scope of research by predicting how far the food supply would carry if England and Wales made a switch to 100 percent organic farming. “The question was, how much could we produce using only organic methods?” Williams said. Forty percent less, it turns out. Organic farming typically produces lower crop yields due to factors such as the lower potency fertilizers used in the soil, which are limited to natural sources such as beans and other legumes. Williams’ model found that a 100 percent organic farming system in England and Wales would mean much smaller crop yields. For wheat and barley, for example, production would be halved relative to conventional farming. “Having established that there would be a shortfall in massive production, the gap would be filled by increased imports,” Williams said. This outcome could lead to a 21 percent rise in greenhouse gas emissions from England and Wales because those imports would likely be raised overseas through conventional agriculture. Such a transition would render moot the potential reductions in greenhouse gas emissions that would otherwise be achieved by the switch. Even though the Cranfield study is hypothetical in nature, environmental sociologist Julius McGee said “it’s a useful tool to pick apart agriculture’s relationship to climate change.” McGee took a similar approach back in 2015, when he authored a study that found the rise of certified organic production in the United States did not correlate with declines in greenhouse gas emissions.
Governments and organizations should consider these driving market forces more carefully when touting the potential environmental benefits of organic farming, he said. “The goal of agriculture is not to produce enough food to feed people, it’s to make the most money,” said McGee, who works at Portland State University and wasn’t involved in the Cranfield study. “I was trying to get people to look beyond the elements of consumer society. Organic is a niche market, and it’s able to make a certain amount of money based on people’s desire to consume organically produced goods.” Will profits prevent an organic cleanup? Some scientists posit that as long as agriculture remains focused primarily on profit, organic farming will only have a minimal impact on environmental protection and reducing climate change. Michel Cavigelli, a soil scientist with the U.S. Department of Agriculture, works with farmers in the mid-Atlantic who are seeking to convert to organic farming. He said while the farmers in this region express concerns about the environmental harms and impacts of the agrochemicals used in conventional farming, the reason they decide to switch to organic practices is often partly driven by economics. Market demand for organic products is expected to reach $70 billion by 2025, making these crops more profitable in the long run. “In general, it’s accepted that you are going to have lower yields, but the price premium makes up for that on the economic side, from a farmer’s perspective,” said Cavigelli, who wasn’t involved in the Cranfield study. He added “they’ll live and die” by their bottom line, not their yields. Cavigelli also noted that while the USDA has had standards for labeling organic products for more than 20 years and its creation was as much about market demand as anything. “USDA doesn’t say that organic is better or worse,” Cavigelli said. “There’s a public demand for it, we need to meet that need. 
That’s kind of been USDA policy since 1997.” Adrian Williams of Cranfield University said the U.K. could not sustain a switch to 100 percent organic with the national diet the way it currently is, but that might not be the case if market demand for certain foods changed. “The real message is that if we try to have the same diet and convert to organic, we can’t really do it without expanding agricultural land demands, simply because it yields less than the current system,” Williams said, adding that testing a model where consumers sought out less red meat and more plant-based foods and fish could result in lower greenhouse gas emissions from organic farming.

Organic farms and the regenerative movement face a long road to sustainability

Proponents of organic farming acknowledge the issue of low crop yields raised by the Cranfield study, but maintain that farmers can still find ways to reduce their carbon footprint by focusing on “regenerative practices.” Erin Callahan, director of the Climate Collaborative, based in Vermont — an organization that seeks to reverse the emissions pollution effects created by climate change in the natural food industry — recognizes that “the yield question is a big one” when it comes to mitigating the harmful environmental effects of agriculture, but warns against reducing the discussion to a matter of “organic versus conventional.” “Making the food system more efficient, wasting less food, and trying to shrink the gap in yield…is the right method forward if we actually want to have agriculture be the solution for climate change,” said Callahan, who added that the current food system in the U.S. would have to change in order for organics to make a significant impact on reversing the effects of climate change.
Callahan’s organization advocates for companies like General Mills — which pledged in March to regenerate 1 million acres of farmland by 2030 — to find ways to capture more carbon in their soil, even if that doesn’t mean switching over entirely to organic practices. As part of their initiative, General Mills launched a regenerative agriculture scorecard for farmers to assess their soil. They also tested regenerative practices on one of their partner pastures, which resulted in 68 percent lower greenhouse gas emissions. There is evidence that these practices do work to cut down on greenhouse gas emissions in certain controlled situations. A widely cited white paper by the Rodale Institute in Pennsylvania found that shifting all global cropland to a regenerative model could cut annual CO2 emissions by more than 100 percent. (Reminder: The planet will likely need to achieve a state of negative emissions to stave off climate change.) “Organic is a really important piece of the puzzle when you’re looking at how to fix the food system,” Callahan said. “But until then, introducing regenerative practices of any kind to do that can help.” Left: For decades, the conventional wisdom surrounding organic farming has been that it produces crops that are healthier and better for the environment as a whole. A new study out this week challenges this narrative. Photo by REUTERS/Enrique Castro-Mendivil
“The question was, how much could we produce using only organic methods?” Williams said. Forty percent less, it turns out. Organic farming typically produces lower crop yields due to factors such as the lower-potency fertilizers used in the soil, which are limited to natural sources such as beans and other legumes. Williams’ model found that a 100 percent organic farming system in England and Wales would mean much smaller crop yields. Production of wheat and barley, for example, would be halved relative to conventional farming. “Having established that there would be a massive shortfall in production, the gap would be filled by increased imports,” Williams said. This outcome could lead to a 21 percent rise in greenhouse gas emissions from England and Wales because those imports would likely be raised overseas through conventional agriculture. Such a transition would render moot the potential reductions in greenhouse gas emissions that would otherwise be achieved by the switch. Even though the Cranfield study is hypothetical in nature, environmental sociologist Julius McGee said “it’s a useful tool to pick apart agriculture’s relationship to climate change.” McGee took a similar approach back in 2015, when he authored a study that found the rise of certified organic production in the United States did not correlate with declines in greenhouse gas emissions.
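The accounting behind the 21 percent figure can be sketched as a simple balance: a fixed food demand, an organic yield penalty, and imports that fill the domestic shortfall at a higher emission intensity. This is an illustrative toy model, not the Cranfield model; the per-unit emission intensities below are hypothetical placeholders, and only the 40 percent yield penalty comes from the article.

```python
# Illustrative back-of-the-envelope model of the Cranfield-style accounting:
# a fixed food demand, an organic yield penalty, and imports that fill the
# shortfall at a higher emission intensity. All intensity values are
# hypothetical placeholders; only the 40% yield penalty comes from the text.

def net_emissions(demand, organic_share, yield_penalty,
                  domestic_intensity, import_intensity):
    """Total emissions when the domestic shortfall is filled by imports."""
    domestic = demand * (1 - organic_share * yield_penalty)
    imported = demand - domestic
    return domestic * domestic_intensity + imported * import_intensity

# Baseline: all-conventional domestic supply (intensity normalised to 1.0).
baseline = net_emissions(100, 0.0, 0.40, 1.0, 1.8)
# All-organic: slightly lower domestic intensity, but 40% of demand imported.
all_organic = net_emissions(100, 1.0, 0.40, 0.9, 1.8)

change_pct = (all_organic / baseline - 1) * 100
print(f"net emissions change: {change_pct:+.0f}%")
```

With these placeholder intensities, total emissions rise by 26 percent even though domestic farming gets cleaner per unit; the direction of the effect, not the exact figure, is the point of the exercise.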
yes
Organic Farming
Are yields from organic farming lower than those from conventional farming?
yes_statement
"yields" from "organic" "farming" are "lower" than those from "conventional" "farming".. "organic" "farming" produces "lower" "yields" compared to "conventional" "farming".
https://rodaleinstitute.org/science/farming-systems-trial/
Farming Systems Trial - Rodale Institute
Farming Systems Trial The Farming Systems Trial was launched in 1981 with a clear goal: Address the barriers to the adoption of organic farming by farmers. For more than 40 years, the Farming Systems Trial (FST) at Rodale Institute has applied real-world practices and rigorous scientific analysis to document the different impacts of organic and conventional grain cropping systems. The scientific data gathered from this research has established that organic management matches or outperforms conventional agriculture in ways that benefit farmers and lays a strong foundation for designing and refining agricultural systems that can improve the health of people and the planet. This material is based upon work supported by the William Penn Foundation under Grant Award Number 188-17. The opinions expressed in this publication are those of the author(s) and do not necessarily reflect the views of the William Penn Foundation. Our decades-long research shows: Organic yields match conventional yields for cash crops, such as corn and soybean. Organic management increases water infiltration and does not contribute to the accumulation of toxins in waterways. Even without the premiums paid for organic crops, the organic manure system is the most profitable system. Organic system operating costs are significantly lower than under conventional management. The Systems The FST compares three core farming systems: a chemical input-based conventional system, a legume-based organic system, and a manure-based organic system. Corn and soybean production is the focus of each system because 70 percent of U.S. acreage is devoted to growing grain. In 2008, each core system was further divided to compare standard full-tillage (FT) and emerging reduced-tillage (RT) practices. At that time, genetically modified corn and soybeans were also introduced to the conventional system to mirror common practices. Conventional Synthetic This system represents a typical U.S. grain farm. 
It relies on synthetic nitrogen for fertility, and weeds are controlled by synthetic herbicides selected and applied at rates recommended by Penn State University Cooperative Extension. Organic Legume This system represents an organic cash grain system. It features a mid-length rotation consisting of annual grain crops and cover crops. The system’s sole source of fertility is leguminous cover crops, and crop rotation provides the primary line of defense against pests. Organic Manure This system represents a diversified organic dairy or beef operation that includes a long rotation of annual feed grain crops and perennial forage crops. Fertility is provided by leguminous cover crops and periodic applications of composted manure from livestock. A diverse crop rotation is the primary line of defense against pests. FST Findings The FST team has been gathering a wide variety of data from the research plots for more than 40 years and thoroughly analyzing it using widely accepted scientific standards. The results indicate that organic farming systems match or outperform conventional production in yield, while providing a range of agronomic, economic, and environmental benefits for farmers, consumers, and society. Soils FST data has established that soil health in the organic systems has continued to increase over time while the soil in the conventional systems has remained essentially unchanged. Cornell comprehensive assessment of soil health (CASH) score of each of the systems in the Farming Systems Trial in 2019 and 2020. Carbon Capture Healthy soil holds carbon and keeps it out of the atmosphere. Organic systems usually have much more diverse carbon inputs going into the soil so microbial biomass is significantly higher than in the conventional system, leading to higher soil organic matter over time. 
Soil microbial biomass carbon (average of 0–10, 10–20, and 20–30 cm depths) of each of the systems in the Farming Systems Trial in 2018. (Adapted from Littrell et al., 2021.) Water Water infiltration is significantly faster under long-term organic management compared to conventional practices. Average water infiltration rates in each of the systems in the FST from 2019–2021. Yields Organic systems produce yields of cash crops equal to conventional systems, except in extreme weather conditions, such as drought, when the organic plots sustained their yields while the conventional plots declined. Overall, organic corn yields have been 31 percent higher than conventional production in drought years. Average corn yield of each of the systems in the Farming Systems Trial from 2008–2020 (Figure A) and corn yield in 2016 (Figure B) which was an especially dry season. Profits An analysis of the cumulative labor, costs, returns, and risk for the three systems shows that the organic manure system is the most profitable for farmers, even without the price premiums paid for organic crops. With current organic price premiums, both organic systems are much more profitable than the conventional system. Net returns (Figure A, without organic price premiums; Figure B, with organic price premiums) of each of the systems in the Farming Systems Trial from 2008–2020. Budgets are for representative farms 54 hectares in size. The Value of Healthy Soil Why is healthy soil so important? Peak Nutrition Soil is the foundation to food production and growing healthy, nutrient-rich food to sustain a growing population. Drought Protection Healthy soil holds moisture until plants need it and creates symbiosis with fungi to extend the root network deeper into the soil. Erosion Prevention The “aggregates” in healthy soil stick together and don’t wash or blow away. Disease Defense Active soil microbes ward off plant diseases. 
Flood Resistance Healthy soil absorbs more water at a faster rate, reducing flooding and runoff. Carbon Capture Healthy soil holds carbon and keeps it out of the atmosphere. The Farming Systems Trial was started by Bob Rodale, who wanted scientific backing for the recommendations being made to the newly forming National Organic Program in the 1980s. Today, the trial is divided into a total of 72 experimental plots.
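The drought-resilience finding can be made concrete with a toy per-year corn yield comparison. The yield values below are invented for illustration; only the qualitative pattern they produce (near-parity in normal years, a large organic advantage in drought years) mirrors the trial's reported results.

```python
# Toy illustration of the drought-resilience pattern reported by the FST.
# Yield values (bu/acre) are invented; only the qualitative pattern --
# parity in normal years, a large organic advantage in drought years --
# mirrors the trial's findings.
corn = {
    # year: (organic_manure, conventional) corn yield, hypothetical
    2012: (130, 100),  # drought year
    2016: (120, 90),   # drought year
    2018: (170, 168),
    2019: (165, 166),
}
drought_years = {2012, 2016}

def pct_diff(years):
    """Organic yield relative to conventional, summed over the given years."""
    organic = sum(corn[y][0] for y in years)
    conventional = sum(corn[y][1] for y in years)
    return (organic / conventional - 1) * 100

print(f"drought years: {pct_diff(drought_years):+.0f}%")
print(f"normal years:  {pct_diff(set(corn) - drought_years):+.0f}%")
```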
no
Organic Farming
Are yields from organic farming lower than those from conventional farming?
yes_statement
"yields" from "organic" "farming" are "lower" than those from "conventional" "farming".. "organic" "farming" produces "lower" "yields" compared to "conventional" "farming".
https://link.springer.com/article/10.1007/s13593-018-0489-3
Risks and opportunities of increasing yields in organic farming. A ...
Abstract Current organic agriculture performs well in several sustainability domains, like animal welfare, farm profitability and low pesticide use, but yields are commonly lower than in conventional farming. There is now a re-vitalized interest in increasing yields in organic agriculture to provide more organic food for a growing, more affluent population and reduce negative impacts per unit produced. However, past yield increases have been accompanied by several negative side-effects. Here, we review risks and opportunities related to a broad range of sustainability domains associated with increasing yields in organic agriculture in the Northern European context. We identify increased N input, weed, disease and pest control, improved livestock feeding, breeding for higher yields and reduced losses as the main measures for yield increases. We review the implications of their implementation for biodiversity, greenhouse gas emissions, nutrient losses, soil fertility, animal health and welfare, human nutrition and health and farm profitability. Our findings from this first-of-its-kind integrated analysis reveal which strategies for increasing yields are unlikely to produce negative side-effects and therefore should be a high priority, and which strategies need to be implemented with great attention to trade-offs. For example, increased N inputs in cropping carry many risks and few opportunities, whereas there are many risk-free opportunities for improved pest control through the management of ecosystem services. For most yield increasing strategies, both risks and opportunities arise, and the actual effect depends on management including active mitigation of side-effects. Our review shows that, to be a driving force for increased food system sustainability, organic agriculture may need to reconsider certain fundamental principles. 
Novel plant nutrient sources (including increased nutrient recycling in society and, in some cases, mineral nitrogen fertilisers from renewable sources) and truly alternative animal production systems may need to be developed and accepted. 1 Introduction Consumer demand for organic products has increased dramatically in the recent past, with global sales increasing more than threefold (although from low levels) since the turn of the century (Reganold and Wachter 2016). Some countries in Northern Europe are currently witnessing a boom in sales of organic foods. In Sweden, sales increased by 18% in 2016 compared with the previous year, with organic products now constituting 8.7% of total food sales (Ekoweb 2017). Organic agriculture emerged as a reaction to the industrialisation of agriculture and its associated environmental and social problems. Whether organic agriculture actually delivers overall advantages over conventional agriculture is however contentious. Some claim that organic farming systems are more profitable and environmentally friendly (Reganold and Wachter 2016), while others question the role of organic agriculture in future sustainable food systems (Connor and Mínguez 2012). The main criticism of organic agriculture is its lower productivity at a time when food production has to increase substantially to feed a growing, more affluent global population. Critics consider organic agriculture inefficient, especially in terms of land use. With the rising global demand for food, they point out that current agricultural land will not suffice and further expansion of agricultural land into pristine ecosystems will result from the expansion of organic agriculture (Kirchmann et al. 2009, Connor and Mínguez 2012). Others suggest that ‘ecological intensification’ of crop production systems, i.e. 
utilisation and management of ecosystem services delivered by biodiversity, rather than anthropogenic inputs, is the best option to sustainably meet future food demand while reducing environmental pressures (Bommarco et al. 2013; Ponisio et al. 2015). Controversies aside, both critics and many proponents of organic agriculture share the common view that yields in organic agriculture have to increase. For the organic movement, the Organic 3.0 initiative (the next stage of development in organic farming) has reignited the debate on the need to increase yields, as it includes an ambition for organic farming to be considered a major, rather than a niche, solution to sustainable farming (IFOAM 2015). Others highlight the need for organic agriculture to increase yields in order to become more ‘environmentally efficient’ since, although organic agriculture is usually associated with lower environmental burdens per hectare compared with conventional farming, adverse impacts are often similar or higher on a per kilogram of product basis due to lower outputs (Clark and Tilman 2017). According to its principles, the aims of organic farming go beyond food production to include caring for and protecting the environment (landscapes, climate, habitats, biodiversity, air and water) and the wellbeing of people and animals (IFOAM 2005). It is thus highly relevant to enquire how a focus on increased yields will affect reaching these wider goals. Seufert and Ramankutty (2017) provide the latest comprehensive review on the costs and benefits of organic agriculture in its current form and conclude that, on the positive side, organic agriculture delivers higher biodiversity and improved soil and water quality per unit area, enhanced profitability and higher food nutritional value. On the negative side, there are many costs, including lower yields and higher consumer prices. How will this change when different strategies to increase yields are implemented? 
The aim of this review is to shed some light on this question. We highlight and analyse possible risks and opportunities related to a broad range of sustainability aspects when aiming to increase yields in organic agriculture. As organic agriculture varies considerably across the globe, we focus our analysis to the context of Northern Europe, using examples from Sweden to illustrate our case. We end this review by summarising our findings and critically reflecting on how organic practices based on current EU regulations (EU 2014) affect the possibility of sustainably increasing yields. The review is structured as follows. Chapter 2 provides the background and includes an overview of organic yields compared with conventional yields, including an outline of factors that limit yields, and strategies to increase yields in organic agriculture. Chapter 3 summarises how striving for increased yields in organic production could affect the following areas: biodiversity, emissions of greenhouse gases (GHG), nutrient losses, soil fertility, animal welfare and health, human nutrition and health, and farm profitability (Fig. 1). For each topic, we start with a brief introduction to the area to cater for the wide audience of this paper due to its broad coverage and to justify inclusion of the area in the review. Based on published research, we then discuss and critically reflect upon how increasing yields through increased inputs, genetic improvement and applying best available management practices will affect this area. The overarching conclusions from the review are summarised and discussed in Chapter 4. In this review, we use the following definitions of yield commonly used in practice and in research, e.g. in field trials, breeding, evaluation of feeding strategies, production metrics etc. For crop production, the yield is defined as the amount of crop harvested from the field per unit area and year. 
As for livestock production, the yield concept is more complex and commonly include both production per animal and time and feed use (amount and type) to produce one unit of animal product; here we include both in our discussion. 2 Yields in organic production 2.1 Crop production Recent meta-analyses with global coverage show that organic crop yields are on average 80% (de Ponti et al. 2012), 66–95% (Seufert et al. 2012) or 81% (Ponisio et al. 2015) of conventional yields. Yield differences vary considerably with growing conditions, management practices and crop types, with legumes showing a considerably smaller yield gap than cereals or tubers. Based on 34 studies from Sweden, Finland and Norway, de Ponti et al. (2012) found that organic yields in this region are 70% of conventional yields. Yield statistics for Sweden from 2015 show that organic cereal yields in that year ranged between 53% (winter rye and winter wheat) and 58% (spring wheat) of conventional yields. Organic leguminous crops yielded 69% (peas) and 87% (field beans) of conventional crop yields and organic leys 87% of conventional yields. These values represent national averages for organic and conventional production, but geographical bias, which is present because there are more organic farms in regions less favourable for cropping, is not accounted for (SS 2016). Supply of nitrogen (N) and control of perennial weeds are two of the most important yield-limiting factors in organic crop production (Askegaard et al. 2011). These are linked, as sufficient N availability for rapid early establishment and growth of crops also has a strong influence on reducing weed infestation, by greater weed suppression ability of the crop (Olesen et al. 2007). 
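The Swedish 2015 yield statistics quoted above are ratios of organic to conventional yields; a small sketch makes the comparison explicit. The absolute kg/ha figures below are hypothetical placeholders, chosen only so that each ratio matches the percentage quoted in the text.

```python
# The Swedish 2015 figures, expressed as organic/conventional yield ratios.
# Absolute yields (kg/ha) are hypothetical placeholders chosen so that each
# ratio matches the percentage quoted in the text.
yields_kg_ha = {
    # crop: (organic, conventional)
    "winter wheat": (3710, 7000),  # ~53% of conventional
    "spring wheat": (2900, 5000),  # ~58%
    "peas":         (2070, 3000),  # ~69%
    "field beans":  (3480, 4000),  # ~87%
}

for crop, (organic, conventional) in yields_kg_ha.items():
    print(f"{crop}: {organic / conventional:.0%} of conventional")
```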
Commonly used organic fertilisers such as manure, compost, green manure and organic wastes are low in plant-available N and this, in combination with slow N mineralisation in the spring due to low temperatures, restricts yields in organic crops, especially in the Nordic countries (Dahlin et al. 2005). Yield losses due to pests and disease also affect the organic-conventional yield gap. The number of crop protection products approved for organic agriculture is very limited (EU 2014), and although they constitute an important input for reducing crop losses, especially in some horticultural crops (Letourneau and van Bruggen 2006), the lack of crop protection products or other effective crop protection measures limits organic yields. We should stress here that, although copper-based products are among the most widely used crop protection products in European organic farming and are important for controlling fungus attacks in, e.g. vines, fruit crops and potatoes (Niggli et al. 2016), copper fungicides are prohibited in Scandinavian countries by national legislation. Organic farmers frequently have to rely on plant varieties bred for high-input conventional systems, i.e. high-yielding varieties with e.g. poor weed competitive abilities and shallower rooting depth (Lammerts van Bueren et al. 2011). In conventional production systems, these deficits are rectified by the use of herbicides and inorganic nutrients. To overcome these limiting factors, a number of strategies are available. Niggli et al. (2016) describe many of these for arable crops, summarised here in Table 1. Some of the strategies involve implementation of well-known best practices, e.g. the use of favourable crop rotation design to prevent weed infestation and disease and pest outbreaks. Others require more research and development, e.g. how to manipulate surrounding landscapes to strengthen functional biodiversity and the use of new fertiliser sources, crop protection products and techniques. 
Furthermore, changes in the EU organic regulations are needed to implement some of the proposed strategies. Concerning horticultural crops, which are susceptible to many pests and pathogens (Letourneau and van Bruggen 2006), new crop protection strategies and development and increased use of a variety of biological control agents (e.g. bacteria, fungi and predatory arthropods) (van Lenteren 2012) will be particularly important to reduce the yield gap. Increased use of resistant varieties is also crucial (Speiser et al. 2006), but these varieties are however not fully resistant, implying that direct crop protection measures will be especially important to secure high yields and product quality in high-value crops. 2.2 Livestock production A study by van Wagenberg et al. (2017) compared different aspects of sustainability including productivity in conventional and organic livestock production systems. For dairy production, seven out of 11 studies showed that organic dairy cows produced 4.7–32% less milk than conventional cows, while three studies did not find a significant difference. Reasons for this yield gap include a longer pasture season, less use of high-yielding breeds and lower levels of concentrate in diets. For beef cattle and laying hens, there are not enough studies available to draw general conclusions on yield differences in these sectors. Regarding broiler chickens in organic production, the use of slower growing breeds compared to the fast growing breeds used in conventional production results in lower yields in term of growth and feed conversion. The high incidences of mortality due to lameness and circulatory problems reported for birds of fast-growing breeds reared in organic production systems with long rearing periods further reduce the net yield (e.g. Wallenbeck et al. 2017; Rezaei et al. 2017). For pigs, productivity is mostly lower in organic production, with higher intake of feed in organic sows and a lower number or weaned piglets per sow. 
In Northern Europe, it is common practice to use the same high-yielding breeds in organic production as in conventional animal production. Hence, the genetic yield potential is the same in both systems. However, these breeds are developed in conventional production environments, so the genetic potential may not be realised to the same extent when environmental factors such as diet composition, housing or disease pressure change. For example, due to less intensive feeding strategies, including large forage allowances and pasture grazing, yields in organic ruminant production are generally lower than in conventional production. In many cases, the highest-yielding breeds in conventional environments are also the highest-yielding breeds in organic environments. In cases where genotype by environment interactions exist, e.g. indications of such interactions for fertility traits have been reported in studies comparing organic and conventional dairy production in Sweden (Ahlman et al. 2011; Sundberg et al. 2010), the difference between various production environments may be lower. If existing genotype by environment interactions are not taken into account in the choice of breeds, it can have severe effects on the yield in organic animal production (Wallenbeck 2009; Ahlman 2010). However, for dairy farming in Sweden, organic yields are only slightly lower than conventional (9321 compared with 10,222 kg energy-corrected milk (ECM) per cow and year) due to similarities in the systems, e.g. high forage ratios in both (VS 2017). Organic and conventional beef production systems show similar results in terms of yield, for the same reason. In Swedish pig production, slaughter weights are 1–5% lower in organic production, but feed consumption is also higher (Wallenbeck 2012). Reports (although scarce) show increased piglet mortality and decreased sow productivity per year in organic herds compared with conventional herds (Wallenbeck et al. 2009). 
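As a quick check on the Swedish dairy figures cited above, the organic yield gap in energy-corrected milk works out to just under 9 percent:

```python
# Quick check of the Swedish dairy comparison cited above (VS 2017):
# the organic yield gap in energy-corrected milk (ECM) per cow and year.
organic_ecm = 9321        # kg ECM per cow and year, organic
conventional_ecm = 10222  # kg ECM per cow and year, conventional

gap_pct = (1 - organic_ecm / conventional_ecm) * 100
print(f"organic dairy yield gap: {gap_pct:.1f}%")  # → 8.8%
```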
Organic hens (and other free range hens) are usually less efficient in terms of feed conversion ratio (minus 2–20%) compared with laying hens housed in aviaries or in cages and commonly show increased mortality due to injury and disease. In organic broiler systems, net yield is also significantly decreased, for similar reasons (Rezaei et al. 2017). For pigs and poultry, the ban on synthetic amino acids in organic systems reduces the yield potential of the conventional hybrids (Eriksson et al. 2010b). Animal health is a key factor influencing the net yield of any livestock production system. In some regards, organic livestock systems perform better than conventional systems, e.g. respiratory diseases are usually lower in organic herds (Hansson et al. 2000). In other areas, problems are more severe in current organic systems. For example, in poultry production morbidity due to parasite infections (i.e. coccidia and nematodes) is a problem, as organic regulations restrict the use of prophylactic medication, which is common in conventional production (Thapa et al. 2015). In pig production, there is an elevated risk of joint lesions in free-range pigs (Etterlin et al. 2014). The therapeutic medications used in organic farming are identical to those used in conventional farming regarding antibiotics and anthelmintics, but the extended withdrawal times required in the organic regulations make their usage less likely. In some countries, alternative medications (homoeopathic or phytotherapeutic therapy) are used. However, homoeopathy and phytotherapy are not widely used in Sweden, as veterinarians are only permitted to prescribe therapeutic methods that are evidence-based. 
Strategies to increase yield in organic livestock production that are common to all species include improved management, especially the use of optimal livestock diets, decreased mortality rates due to injury and disease and improved breeding that matches the requirements of organic production and the production environments for the animals in organic herds. Table 2 summarises species-specific strategies based on van Wagenberg et al. (2017). Table 2 Strategies to increase yields in organic livestock production that are applicable to Northern Europe. Based on van Wagenberg et al. (2017) 3 Risks and opportunities associated with increasing yields in organic production 3.1 Biodiversity The expansion of agricultural land, the decline in landscape heterogeneity, increased use of fertilisers and pesticides and conversion to systems with reduced crop diversity have had major effects on global biodiversity (Emmerson et al. 2016). Organic farming generally increases crop and landscape heterogeneity compared with conventional farming, which enhances biodiversity. For example, overall species richness on organic farms is on average 34% (95% CI: 26–43%) higher than on conventional farms, according to one meta-analysis (Tuck et al. 2014). However, the magnitude of the positive effects varies widely among organism groups, e.g. for pollinators and predators species richness is 50% (95% CI 27–77%) and 12% (95% CI 1–24%) higher, respectively, on organic farms (Tuck et al. 2014). The positive effects also show large variation across landscapes, e.g. with lower effects in more diverse landscapes (Winqvist et al. 2012). However, the benefits of organic production for biodiversity have been shown to be greatest at field level in some cases, while gains at farm or landscape level may be smaller (Rundlöf et al. 2010; Schneider et al. 2014). Some practices for increasing yields in organic crop production carry a risk of attenuating the current positive effects on biodiversity. 
For example, higher frequency of mechanical weeding affects floral abundance in fields (Fig. 2), and may potentially decrease the density and species richness of organisms at higher trophic levels, such as arthropod generalist predators (Diehl et al. 2012). However, restoration or conservation of refuge areas in field margins and habitats adjacent to arable fields may counteract negative effects on diversity at farm level and increase yields through provisioning of habitats for a number of organisms important for biological control (Benton et al. 2003; Rundlöf et al. 2010). Habitat manipulations aim to increase biodiversity locally or regionally by providing shelter and/or feed for natural enemies and pollinators, which otherwise have little chance of survival in less complex landscapes and low-diversity agro-ecosystems (Gurr et al. 2017). Evidence is mounting that habitat manipulation approaches, e.g. flower strips, can be effective when applied at realistic scales (Tschumi et al. 2016; Gurr et al. 2016), and practical implementation of such techniques is slowly increasing around the world (Gurr et al. 2017). However, more research is needed on the design of diversity-promoting elements on farms and in the agricultural landscape. Ideally, these should not reduce productive areas, which will be particularly challenging in landscapes dominated by arable fields. More precise guidelines and specific standards for biodiversity conservation would be beneficial in organic regulations, but they must allow flexibility in relation to site-specific conditions. Fig. 2 Mechanical weeding (a) is effective for removing weeds, but negatively affects floral abundance in fields (b), which may also decrease the density and species richness of other organisms Greater inputs of nutrients aimed at increasing organic yields and giving denser crops may also negatively affect diversity (Flohre et al. 2011; Gabriel et al. 2013). 
However, if inputs are applied with greater precision, this is likely to enhance yields and reduce nutrient losses and runoff, which will be positive for biodiversity due to reduced eutrophication of surrounding ecosystems (Cunningham et al. 2013). Increased use of chemical crop protection agents approved for organic production, which promote high yield levels and yield stability, may concurrently have negative effects on a range of organisms in the field and the agricultural landscape. Nine chemical pesticide substances are currently approved for use in organic agriculture in Sweden and some of these have known negative effects on non-target organisms. Most notably, the use of pyrethrins, a plant extract approved as an insecticide in organic agriculture, poses risks for aquatic invertebrates. On the other hand, so-called ‘basic substances’ (Marchand 2015) are generally of low concern for biodiversity. Other direct crop protection methods include augmentative biological control based on the release of microorganisms (Glare et al. 2012; Lacey et al. 2015) and macroorganisms (van Lenteren 2012) (e.g. antagonistic fungi or parasitoid wasps). Their increased use is likely to be accompanied by no or small negative effects on biodiversity. While the literature reports great potential for such control methods, their actual use is still limited, except in high-value crops (van Lenteren 2012). Organic farmers today largely rely on plant varieties bred for high-input conventional systems. Future breeding for both increased yields and genetic diversity includes better adapted and genetically diversified crops for organic farming. This can increase yields through incorporation of multiple traits such as weed competitive ability, disease resistance and high nutrient uptake efficiency (Lammerts van Bueren et al. 2011).
Furthermore, varieties selected under organic or low-input conditions have been shown to perform better in variety testing in organic environments, even if this is not always the case (Mikó et al. 2017). A selection made under stress may result in more competitive lines adapted to, e.g. lower levels of available nutrients, which is often the case in organic systems (Kirk et al. 2012). Although better plant varieties can bring multiple advantages, breeding strategies aimed at reducing weeds risk reducing in-field diversity, just like other measures such as mechanical weeding. In Europe, the abandonment, rather than the expansion, of agricultural land poses a serious threat to many endangered species that have adapted to landscapes shaped by traditional low-intensity farming practices (Queiroz et al. 2014). Grazing and traditional methods of forage harvesting of semi-natural pastures are therefore important strategies for preserving a varied agricultural landscape with high biological and cultural values in many countries in Northern Europe (Luoto et al. 2003; Kalamees et al. 2012). The use of higher proportions of concentrate feeds in beef and dairy diets decreases time spent grazing, with associated negative consequences for pasture maintenance and for resource efficiency in terms of roughage conversion (i.e. kg milk or meat per kg roughage) (Weibull and Östman 2003). In addition, beef breed bulls may be preferred over steers (castrated male offspring) when aiming at increasing yields, due to bulls’ greater potential for more rapid growth. However, bulls normally only graze during their first summer in Sweden due to safety of workers and the general public and for economic reasons, whereas steers normally graze for two or three summers (Hessle and Kumm 2011). Developing multifunctional and mixed animal production systems by, e.g.
combining high-yielding dairy cows with breeds suitable for grazing, or using dual-purpose breeds, could also contribute to conserving biodiversity. More effective agri-environmental schemes that steer production in this direction need to be developed and also implemented in organic regulations.

3.2 Emissions of greenhouse gases

The climate impact from agriculture in Northern Europe arises mainly from emissions of nitrous oxide (N2O) from soils, driven largely by N application (44% of GHG emissions from Swedish agriculture), carbon dioxide (CO2) emissions from organic soils (12%), methane (CH4) from enteric fermentation in ruminants (26%) and emissions of N2O and CH4 from manure management (5%) (SBA et al. 2012). Fossil energy use in field machinery and animal housing adds to these emissions, but to a lesser extent (10% of GHG emissions). In conventional agriculture, the production of mineral fertilisers is also a considerable source of GHG. Although organic farming does not use energy-demanding mineral fertilisers, the production and transport of some organically acceptable fertilisers require non-negligible amounts of fossil energy input, while GHG emissions can also arise during storage (Spångberg 2014). The yield level is influential when calculating the climate impact per unit product, as the GHG emissions from soils and inputs are distributed over the total output (Röös et al. 2011). Therefore, organic products are frequently assessed as having similar or larger climate impacts per unit product than conventional products, as the lower GHG emissions from avoidance of mineral fertilisers and other inputs are cancelled out by the lower yields (Clark and Tilman 2017). For N2O emissions specifically, Skinner et al. (2014) showed that for yield gaps larger than 17%, N2O emissions are higher for organic products than for conventional products.
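The arithmetic behind per-product climate impact is simply per-hectare emissions divided by output, which makes the role of the yield gap easy to see. The sketch below is illustrative only: the per-hectare emission and yield figures are hypothetical, not values taken from Röös et al. (2011) or Skinner et al. (2014).

```python
def impact_per_unit(emissions_per_ha, yield_per_ha):
    """Climate impact per unit product: per-hectare emissions spread over output."""
    return emissions_per_ha / yield_per_ha

# Hypothetical figures: conventional emits more per hectare (mineral fertiliser
# production), organic emits less per hectare but also yields less.
conv = impact_per_unit(emissions_per_ha=3000.0, yield_per_ha=6.0)  # kg CO2e / t

for gap in (0.10, 0.17, 0.25):  # fractional organic yield gap vs conventional
    org = impact_per_unit(emissions_per_ha=2500.0, yield_per_ha=6.0 * (1 - gap))
    print(f"yield gap {gap:.0%}: organic/conventional impact ratio = {org / conv:.2f}")
```

With these made-up numbers, the per-product advantage of lower organic input emissions shrinks as the yield gap widens and eventually reverses, mirroring the qualitative pattern reported for N2O.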
Hence, there is an opportunity to combine increased yield in organic agriculture with reduced climate impact if yield increases can be achieved with no or low increases in GHG emissions from fields and inputs. However, measures taken to increase yields in organic agriculture have complex effects on the climate impacts of production. As described below, there are ways to increase yields that have clear co-benefits for reducing the climate impacts. However, the climate effects of increased yield are often not so clear-cut and the overall balance between increasing and decreasing GHG emissions depends on local conditions and can only be determined on a case-by-case basis. Increased use of mechanical weeding increases CO2 emissions as a result of fossil fuel combustion. However, the climate impact from farm machinery use is usually a minor part of the climate impact of production (Röös et al. 2011), so the increase in yield from weed control can often compensate climate-wise for the increased fossil fuel use. Apart from reducing GHG losses from organic agriculture, increased use of renewable resources in organic agriculture is in line with organic principles. Biogas production from agricultural residues and/or manure is beneficial from a climate perspective, as it provides renewable energy (Kimming et al. 2015; Siegmeier et al. 2015). Yields can also increase, as the anaerobic digestion process increases the plant availability of N in digestate used as fertiliser. In the future, there will be new opportunities for increasing yields through more intensive machine use without increasing GHG emissions, by a transition to electric machinery in combination with renewable electricity. Livestock diets with a higher proportion of concentrate feed increase milk yields and growth rates, and thus reduce methane emissions per unit product. However, emissions from feed production, including soil carbon sequestration or losses, influence the total climate impact of production.
Sequestration is generally larger in ley cultivation than for annual crops (Poeplau et al. 2015). Production of a ruminant diet with a large proportion of forage can therefore lead to greater sequestration (or lower losses) of carbon in soils than a diet based on more grains and concentrates. The joint production system of dairy, which produces both milk and meat, constitutes an important exception to the rule of thumb that increased yields reduce the climate impact. When milk yields increase, the amount of meat from the dairy system decreases, as fewer cows are needed and hence fewer calves are born. If this ‘lost’ meat is replaced by beef meat from suckler herds, the total climate impact from milk and meat production increases, as the climate impact of beef from suckler herds is higher than that from dairy. Hence, the total climate impact from milk and meat production can be lower with lower-yielding dairy cows (Flysjö et al. 2012). In summary, there are many examples of how increasing yields can lead to decreased climate impact, but the examples from ruminant production illustrate how important it is to consider the climate impact from agricultural products from a systems perspective on a case-by-case basis, to avoid sub-optimisation. Moreover, estimation of climate impacts is hampered by the large variability in biological systems, including the large uncertainties in measuring or modelling N2O emission rates and soil carbon dynamics (Nylinder et al. 2011; Powlson et al. 2011).

3.3 Nutrient losses

Loss of N and phosphorus (P) from agricultural systems to waterways is a serious problem causing eutrophication, particularly in coastal areas. Agriculture is also the main contributor to airborne NH3 emissions, mainly from manure management (SBA et al. 2012). Increased inputs of nutrients, especially N, have great potential to increase yields in organic farming (Doltra et al. 2011).
However, there is an increased risk of nutrient losses with higher N inputs that needs careful consideration. The risk is greatest when N released from organic fertilisers does not match crop uptake or when N fertilisation rates start to approach or exceed the ‘economic optimum level’, calculated from known yield response to N mineral fertilisation (Delin and Stenberg 2014). Above the optimum, the yield response ceases and N leaching losses increase exponentially (Fig. 3). Currently, N inputs in organic crop production are often well below the optimum level (SS 2017). Simulations show potential to increase yields through additional use of manure or other organic fertiliser inputs, without negative effects on N leaching (Doltra et al. 2011). Careful management of animal manure to minimise NH3 losses is also crucial, including the use of covers on manure storage facilities and precision spreading. Bandspreading in growing crops and direct incorporation of manure in soils minimise NH3 emissions, increase N use efficiency and raise yield levels (Webb et al. 2013). One of the main sources of N in organic systems is biological N fixation by annual and perennial legumes. The risk of N losses may increase with a large proportion of legumes in the crop rotation, as it is challenging to synchronise the timing of N release with crop requirements (Olesen et al. 2009). For example, incorporation of N-rich crop residues in autumn before sowing of, e.g. winter cereals increases the risk of leaching, due to high N mineralisation in autumn often exceeding crop N uptake (Torstensson et al. 2006). Appropriate management practices may reduce such risks. Askegaard et al. (2011), Doltra et al. (2011) and Plaza-Bonilla et al. (2015) found potential for catch crops, i.e. crops grown between main crops with the purpose of taking up residual available nutrients, mainly N, in soil, to reduce N losses and release N to the main following crop.
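The asymmetry described above, a diminishing yield response to N combined with leaching that rises sharply once rates exceed the economic optimum, can be sketched with toy functions. The Mitscherlich-type response and every parameter value here are hypothetical illustrations, not fitted to Fig. 3 or to any of the cited trials.

```python
import math

def yield_response(n_rate, y_max=7.0, k=0.02):
    """Diminishing-returns yield response to N (Mitscherlich form), t DM/ha."""
    return y_max * (1.0 - math.exp(-k * n_rate))

def n_leaching(n_rate, base=10.0, n_opt=120.0, r=0.03):
    """Leaching (kg N/ha): grows slowly below the optimum, exponentially above it."""
    excess = max(0.0, n_rate - n_opt)
    return base + 0.05 * min(n_rate, n_opt) + base * (math.exp(r * excess) - 1.0)

for n in (60, 120, 180):  # N rates in kg N/ha
    print(f"N {n:3d}: yield {yield_response(n):.2f} t/ha, "
          f"leaching {n_leaching(n):.1f} kg N/ha")
```

In this toy model, raising N from 120 to 180 kg/ha adds little yield but roughly quadruples leaching, which is why fertilisation above the optimum is unattractive even before environmental costs are priced in.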
In Nordic long-term field trials at different sites, catch crops have improved mean grain yields, corresponding to 0.2–2.4 Mg DM ha−1 for spring oats and 0.1–1.5 Mg DM ha−1 for spring barley (Doltra and Olesen 2013). Spring tillage on suitable soils is another efficient strategy to decrease N leaching losses during the winter season (SMED 2015). However, on clay soils, in combination with cold conditions early in the growing season, such a measure could reduce N mineralisation rates, negatively affecting crop N availability in spring and early summer and leading to lower yield. Using genetically diverse crops, including intercrops and variety mixtures, that have the potential to perform well under different environmental conditions also minimises the amount of residual available nutrients in the soil (Wolfe et al. 2008). Some nutrient losses are however inevitable. Therefore, using vegetation zones, wetlands, sedimentation ponds and other measures in the landscape to protect vulnerable waters through capturing lost N and P is crucial. Such strategies need to be incorporated into organic regulations to prevent eutrophication from organic agriculture, especially if nutrient inputs are increased. The availability of N and other nutrients in forms approved for organic production is already limited. Due to the inevitable losses, both increased recycling of nutrients from society and ‘new’ nutrients will be needed for organic yields to increase and organic agriculture to expand. If organic regulations are modified to allow general use of biogas digestate from e.g. food and slaughter waste and/or human urine, which has high levels of plant-available N, this gives opportunities for increased recycling and more precise timing of N fertilisation. This in turn can improve N use efficiency and yield levels and potentially reduce N losses (Salomon and Wivstad 2013). Some suggest that restricted use of mineral N fertilisers produced by renewable energy (Tallaksen et al. 
2015) may be an interesting option to consider as a way of providing ‘new nitrogen’ that could be supplied with high precision. This is currently far from being allowed in organic regulations and challenges the basic principle that organic farming relies on, i.e. feeding the soil rather than the plant. It also feeds a model of organic farming which is about input substitution rather than system redesign. A potentially less controversial option may be source separation of human wastes, such that diluted urine could be used in a precision N fertiliser context. There are currently proposals to allow P fertilisers derived from human waste, such as struvite (EC 2016). If organic yields are to increase and organic agriculture to expand substantially, novel approaches to nutrient supply are unavoidable. Increased use of concentrate feeds in organic livestock production to increase yields risks leading to increased amounts of nutrients in the manure and an increased risk of subsequent nutrient losses, especially of NH3 (Oenema et al. 2007). Due to the ban on synthetic amino acids in organic production, pigs and poultry are often overfed with protein (+ 5–10% of crude protein in laying hens) in order to reach sufficient levels of certain amino acids in the feed (van Krimpen et al. 2016). This may lead to increased N losses to the environment, but the impact varies substantially from farm to farm (Degre et al. 2007). The need to overfeed pigs and poultry could be reduced by the introduction of novel protein feeds such as mussel meal (Jönsson et al. 2011) and insects (Khusro et al. 2012) and/or removal of the ban on synthetic amino acids in future organic regulations. Re-coupling of animal production and production of feed (Garnier et al. 2016), accompanied by development of new business models and partnerships between organic farmers (Asai and Langer 2014), has been proposed as an option to reduce N losses.
Such integration could increase yield levels in organic arable crop production due to greater access to nutrients (Doltra et al. 2011). Crop-livestock integration with ruminants also introduces leys on arable farms, promoting crop yields through increased soil fertility and reducing the risk of N and P losses (Aronsson et al. 2007).

3.4 Soil fertility

Agricultural soils are affected by many anthropogenic pressures, such as loss of soil organic carbon (SOC), nutrient depletion, soil compaction and heavy metal deposition (Smith et al. 2016). In Northern Europe, however, the situation is not as severe as in some other parts of the world. In Sweden, cropland topsoils have an average organic matter content of 4% (albeit with high variation), which is considered sufficient to maintain soil fertility for crop production (Eriksson et al. 2010a). A high SOC level is a key characteristic of soil fertility, as it promotes soil structure, aeration, water-holding capacity, chemical buffering capacity, soil microbial activity, plant root development and continuous release of plant nutrients through mineralisation. A global review by Gattinger et al. (2012) indicated that soils in organic cropping systems have significantly higher levels of SOC than those in conventional systems. Tentative explanations include increased external carbon inputs, organic matter recycling and extended crop rotations with forage legumes in organic systems. Increased yields lead to increased amounts of crop residues being incorporated into soils, raising SOC levels (Diacono and Montemurro 2010). Increasing fertiliser inputs to increase yields reduces the risk of depletion of a range of essential soil nutrients. This is particularly important in organic stockless systems and in systems with small or no external inputs of fertilisers (Watson et al. 2002). Increased use of fertilisers with high nutrient availability, e.g.
biogas digestate, or future introduction of renewable mineral fertilisers in organic farming could provide the potential to increase yields through increased precision in fertiliser application. However, such fertilisers may not contribute to SOC building to the same extent as fertilisers rich in organic carbon. Practices typical of ‘conservation agriculture’, including diversified crop rotations, maximum soil cover and reduced tillage, contribute to reduced soil degradation (Cooper et al. 2016). However, implementation of reduced tillage is limited in organic agriculture, mainly because of the important role of tillage for control of weeds. Shallow inversion tillage at strategic stages in the crop rotation could be a good compromise to ensure both effective weed control and SOC gains (Cooper et al. 2016). A concern for soil fertility associated with spreading of liquid fertilisers, as well as mechanical weeding, is the risk of soil compaction. The development of lighter machinery for mechanical weeding (e.g. self-driving weeding robots), fertiliser spreading through pipelines and processes for reducing the water content in liquid fertilisers will help to reduce this problem. As discussed in Section 3.3, nutrient recycling within the food system needs to be improved to maintain long-term sustainable nutrient supply and there are several promising options (Oelofse et al. 2013). However, urban waste products may contain a number of contaminants, including heavy metals, e.g. Cd, which is of great concern for public health (Åkesson et al. 2014). New techniques are needed for safe recycling systems, e.g. by source separation of sewage (Spångberg 2014). There are also various technologies to recover P from wastewater and sewage sludge by crystallisation or precipitation, with reduced risk of contamination compared with untreated sewage sludge. 
Treated sewage sludge products may be of higher quality with respect to contaminants than fertilisers approved in current organic regulations, such as natural phosphate rocks or even animal manures (Wollman and Möller 2015). Closing the nutrient loop is one of the major sustainability challenges for agriculture going forward. However, as current organic regulations hinder the use of many urban waste products, organic agriculture is actually less progressive in this area than conventional agriculture. As described in Section 3.2, higher proportions of concentrates in livestock diets to increase livestock yields require more annual cropping, which risks less SOC formation compared with leys (Freibauer et al. 2004). The importance of including clover/grass ley in the crop rotation for preserving carbon stocks in soils is demonstrated in Swedish monitoring datasets by higher organic matter content in soils on dairy farms than on pig farms, which mainly grow annual crops (Eriksson et al. 2010a). Consequently, in order to increase yields in production systems with ruminants, increased forage quality through, e.g. optimising ley harvesting times (Nadeau et al. 2015) would be more favourable for promoting soil fertility than introducing higher concentrate proportions.

3.5 Animal health and welfare

There have been enormous increases in livestock productivity in recent decades. In Northern Europe, yields in pig production and milk yield per dairy cow have approximately doubled since the 1960s. The division of the domestic hen into egg-laying breeds and meat-producing broiler breeds has increased poultry productivity dramatically (Appleby et al. 2004). However, modern industrialised livestock production systems affect the health and welfare of farm animals in many ways, including health problems related to breeding for high productivity (e.g.
leg problems in broilers, high piglet mortality in pork production due to smaller and less vital piglets and mastitis in dairy cows) and limitations on animals expressing their natural behaviour due to being reared in confined and barren environments (e.g. restriction of movement due to crating of sows and the development of injurious behaviours such as tail biting in pork production and feather pecking in poultry) (von Keyserlingk and Hotzel 2015). Continued breeding for high growth rates, without taking other important breeding traits such as animal health and behaviour into account, and the use of these breeds in organic production risk aggravating current health problems further. For example, there is little or no difference in cow health between organic and conventional dairy systems in Sweden (Fall et al. 2008; Sundberg et al. 2009) due to the small differences in production system, i.e. same breeds and similar yield levels. Hence, Nordic organic dairy systems are among the most high-yielding dairy systems globally, but this comes at a price. Dairy cows commonly suffer from udder health disturbances and locomotion disorders; in 2013/2014, 26% of Swedish dairy cows were treated for some medical condition, although breeding in Sweden combines production, health, fertility and longevity traits into a ‘total merit index’ (Oltenacu and Broom 2010; Rodriguez-Martinez et al. 2008). Joint lesions arise in all pig production systems, but they are more frequent and severe in organic compared with conventional production due to higher stress on pig joints in spacious and outdoor environments, as the leg conformation of modern, fast-growing pigs is not suited to the level of exercise required with large space allowances (Engelsen Etterlin et al. 2015). 
Hence, it is worth discussing whether still higher yields per animal are desirable and in line with organic principles; attention should perhaps focus on improving animal health and welfare at current production levels or even accepting lower yield per animal if necessary. The development of more suitable breeds should be considered, possibly using or crossbreeding with smaller or indigenous breeds possessing traits favourable for animal health and behaviour in the local environment. However, there are short-term solutions that can be implemented in current organic livestock systems in Northern Europe to improve welfare and increase yields and there are several examples of clear synergies in this area. One example is the use of more suitable breeds that are available internationally today. The use of slower-growing breeds in broiler production could improve animal health and also increase net yield at flock level, due to more appropriate behaviour leading to an increased number of broilers being healthy at slaughter compared with fast-growing breeds (Rezaei et al. 2017; Wallenbeck et al. 2017). The implementation of management practices that lead to healthy animals with high fertility and without behavioural disturbances would also contribute to higher yields at herd level and naturally improved animal welfare. For dairy systems, such practices include increased milking frequency, extended calving interval (Österman 2003) and the use of methods for reducing parasite infestation (Höglund et al. 2013). For pigs, pasture and roughage allowances allow natural foraging behaviour and decrease aggressive interactions between pigs (Høøk Presto 2008), although pigs kept on pasture are more susceptible to diseases caused by parasites (VKM 2014). Designing sow and piglet housing to allow sows and piglets to communicate and behave in an optimal way, and herdsmen to care for weak piglets, is essential for reducing piglet mortality and thus improving yields at herd level. 
Selection of sows with suitable maternal abilities in terms of milk production and maternal behaviour is another key factor (Wallenbeck et al. 2009). Hygiene measures in houses and rotation of outdoor areas are important for all livestock species. Such measures have proven effective, as shown for organic poultry by the low prevalence of salmonella (Wierup et al. 2017). Improved and well-balanced livestock diets to raise yields can also improve animal welfare by e.g. preventing injurious behaviour and avoiding nutrient deficiencies. For example, problems with feather or vent pecking in laying hens can be reduced by feeding an optimal diet with e.g. high-quality protein and roughage allowances (Rodenburg et al. 2013). If future organic regulations were to allow supplementation of essential amino acids in livestock diets, that would be a major advantage, allowing avoidance of over-feeding (Eriksson et al. 2010b; Leenstra et al. 2014). Improved utilisation of the protein available in roughages is another route, which would improve pig welfare through enabling foraging behaviour, reduce injurious behaviour and thereby potentially decrease the risk of disease (Presto et al. 2013; Wallenbeck et al. 2014). Ruminants are adapted to a forage-based, fibre-rich diet and feeding high levels of concentrate may lead to metabolic diseases (Jorgensen et al. 2007). However, as organic regulations mandate high levels of forage in ruminant diets, the risk of such problems in strategies to increase yields involving higher concentrate proportions in organic ruminant diets is small. Improved forage quality makes it possible to use high proportions of forage (e.g. 60–70% of total dietary dry matter) even for high-yielding cattle (Patel 2012; Nadeau et al. 2015; Johansson et al. 2016). Cows fed forage-based diets for up to seven lactations showed no negative development in terms of production efficiency with age; older cows were even able to ingest more (Grandl et al. 2016).
As cows in this case also lived longer, it could be argued that welfare also increased. Therefore, there is great potential to maintain or even increase milk yields and growth rates using forage-dominated diets that also improve cattle welfare. However, if time spent grazing decreases as a consequence of changed feeding regimes to increase yields, this will also negatively affect ruminant welfare; significant benefits for animal health, fertility and farmer profitability have been found for grazing systems compared with year-round indoor systems (Ekesbo 2015). One way to achieve increased yields in beef production with few negative impacts on animal welfare is to use dairy/beef cross-breed animals, which would allow dairy cows to produce calves with the good growth potential of the beef breeds. The cross-breed calves could be raised as heifers or steers in grazing systems, providing potential synergies for yields, biodiversity conservation (Section 3.1) and animal welfare.

3.6 Human nutrition and health

It is well known that the input levels of plant nutrients affect plant development and composition (Bindraban et al. 2015; Wiesler 2012), as well as crop yields. To some degree, yield and nutritional quality may be divergent breeding goals (Morris and Sands 2006), since historically, the breeding and production of high-yielding varieties has led to a decreasing content of certain minerals in some vegetable and cereal crops (Marles 2017). The production system, organic or conventional, generally has no or only a small effect on the concentrations of most nutrients and secondary metabolites in crops. The exception to this is phenolic compounds, where various meta-analyses report an overall modestly higher concentration (14–26%) of total phenolics in organic crops (Mie et al. 2017). Increased N fertilisation has a negative effect on the concentration of phenolic compounds in crops (Treutter 2010).
Phenolic compounds from plant sources are believed to carry benefits for human health, although this is not fully understood (Del Rio et al. 2012). Based on current knowledge, it is not possible to derive any specific health benefit from the slightly higher concentration of phenolic compounds in organic crops. Accordingly, increasing yields in organic farming by increasing crop fertilisation is not expected to lead to nutritionally relevant effects on crop composition. In a 2-year controlled field trial examining the composition of white cabbage using untargeted metabolomics, measuring approximately 1600 compounds, researchers were able to discriminate between cabbage from organic and conventional production, but not between cabbage from one low-input and one high-input organic system (Mie et al. 2014). Therefore, intensifying organic crop production within the range of current organic fertilisation practices is not expected to lead to major changes in plant composition. The use of chemical pesticides is strongly restricted in organic production. Limited data indicate that toxicity-weighted human dietary pesticide exposure from organic foods in Sweden is far lower than exposure from conventional foods (Beckman 2015), and the associated health risks are small. However, 10 compounds with some type of identified human toxicity are currently approved in organic crop production in the EU (Mie et al. 2017), and increased inputs of these compounds, which are likely to lead to increased human exposure, are per se undesirable. Conversely, increased inputs in the form of ‘basic substances’ are regarded to be of low concern for human health (Marchand 2015). Likewise, the use of microorganisms, macroorganisms or habitat manipulation in plant protection is not associated with any known risks for humans. 
Lowering the crop pest and disease burden by good management could in some cases result in lower concentrations of some plant defence compounds that are expressed in response to infestation. However, there is no convincing evidence that this effect is relevant for human nutrition. For cereal crops, deoxynivalenol (DON) is an important fusarium toxin and a common cause of cereal crop losses due to maximum limits for food being exceeded. DON exposure is close to or higher than the tolerable daily intake (TDI) for certain subpopulations in Europe (EFSA 2013). On average, organic cereals have lower DON levels than conventional cereals (Smith-Spangler et al. 2012). Increasing yields through higher N fertilisation is likely to lead to increased DON concentration in cereal crops. On the other hand, increasing marketable yields by counteracting fusarium infestation, through management practices such as suitable crop rotation, incorporation of crop residues in soils, choice of cultivar and proper drying and storing of cereals after cropping, should lead to decreased DON concentration in the crop (Kabak et al. 2006). In a recent review (Bedoussac et al. 2015), cereals in cereal-legume intercropping systems had a higher (0.33 compared with 0.27 kg m−2) and more stable grain yield than the mean of partner crops grown as sole crops under the same conditions. Cereal intercrops also had a higher protein content compared with sole crops (11.1 compared with 9.8%), while the legume protein content was not affected by intercropping. In animal feeds, most ingredients in concentrate feeds, such as cereals, contain less than 10% omega-3 fatty acids of total fatty acids, while grass and red clover contain between 30 and 50% omega-3 fatty acids (Woods and Fearon 2009). Omega-3 fatty acids are a group of fatty acids that are essential to humans and, in general, increased human intake is desirable (Burdge and Calder 2006). 
The fatty acid composition in feed largely determines the fatty acid composition of milk or meat, although this relationship is not linear for ruminants (Khiaosa-Ard et al. 2015; Woods and Fearon 2009). Consequently, higher inputs in the form of concentrate feeds are likely to negatively affect the omega-3 fatty acid content of the product; on average, organic cow milk has 56% (95% CI 38, 74%) higher concentrations of omega-3 fatty acids (Średnicka-Tober et al. 2016b). A similar, plausible, although less well-documented relationship appears to exist for meats (Średnicka-Tober et al. 2016a). The nutritional consequences are likely to be small, as studies from various European countries indicate that dairy products on average contribute 5–16% and meat 12–17% of the total omega-3 fatty acid intake in human diets (Mie et al. 2017), although this contribution may be higher for certain dietary patterns. A modest increase in concentrate feeds in organic animal production is therefore not expected to lead to a substantial decrease in omega-3 fatty acids in the human diet. Measures to improve animal health in general to avoid yield losses due to animal diseases could lead to lowered pathogen levels in e.g. poultry meat.

3.7 Farm profitability

The profitability in organic production varies considerably between products, regions and farms. However, many studies have concluded that organic farms are frequently more profitable than conventional farms due to higher price premiums, government support and/or lower costs (Nemes 2009). In a recent meta-analysis, Crowder and Reganold (2015) found that without price premiums organic farming would be significantly less profitable than conventional agriculture due to 10–18% lower yields, showing the importance of price premiums for profitability in organic farming.
For the farmer, the economic effect of increased yields in organic agriculture will depend on how the revenues of the farming business are affected, including how consumers respond to such changes and the costs associated with achieving increased yields. The profitability of organic farming hence depends strongly on consumers being willing to pay a price premium. Crowder and Reganold (2015) found that a premium of 5–7% is required for the profits in organic farming to equal those in conventional farming, while the actual premium is around 30%. Reasons for buying organic food include health and nutritional concerns, perceived superior taste, environmental and animal welfare concerns and distrust in conventional food production (Hoffmann and Wivstad 2015). Although higher yields per se do not necessarily affect demand, a change towards more intensive practices in organic farming, making it more similar to conventional farming in some respects, e.g. through increased use of fertilisers and concentrate feeds, may negatively affect the premium some consumers are willing to pay for organic food (Adams and Salois 2010). Furthermore, increased yields would presumably lead to a larger supply of organic products, which, if not matched by a corresponding increase in consumer demand, would result in a reduction in prices. In countries where organic production receives government support, another potential risk to farm revenues of increasing yields is that it may be used as an argument for removing subsidies. Improving productivity generally requires investment in additional capital (e.g. machinery or additional land) and/or labour (e.g. increased mechanical weeding), which may increase the financial risk for the farmer. Hence, increased yields may not be preferred by all farmers, although some studies have found organic farmers to be less risk-averse than conventional farmers (Gardebroek 2006), and intensification may reduce yield variation.
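The break-even arithmetic behind these premium figures can be sketched in a few lines. The formula and the equal-costs assumption below are illustrative simplifications, not the model used by Crowder and Reganold (2015):

```python
# Illustrative only: the price premium p at which organic revenue matches
# conventional revenue, assuming equal per-hectare production costs.
# From (1 - yield_gap) * (1 + p) = 1 it follows that p = gap / (1 - gap).
def break_even_premium(yield_gap: float) -> float:
    """Premium needed for revenue parity given a fractional yield gap."""
    return yield_gap / (1.0 - yield_gap)

for gap in (0.10, 0.18):  # the 10-18% yield gap range cited above
    print(f"yield gap {gap:.0%} -> break-even premium {break_even_premium(gap):.1%}")
```

On revenue alone, a 10–18% yield gap would require an 11–22% premium; the lower 5–7% figure reported in the meta-analysis reflects that organic systems also tend to have lower input costs, which this sketch ignores.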
Variations in yield, and hence in economic returns, between organic farms have been partly explained by differences in management and marketing skills. Experience and knowledge influence farmer behaviour. For example, a flexible approach to crop rotations on organic farms in Sweden has been found to be positively correlated with the experience of the farmer (Chongtham et al. 2016). Knowledge transfer between farmers is important in improving management skills and the ability of farmers to apply best available management practices. Yield increases which depend on investments in costly specialist machinery (e.g. for mechanical weed control) may create incentives for more extensive cooperation in sharing machines. Adoption of new technologies is becoming easier and less costly as the technology becomes more widespread. Thus, more widespread uptake of good organic practices will promote yield increases (Läpple and van Rensburg 2011). This stresses the importance of effective communication channels for knowledge sharing and transfer in improving yields and productivity in organic farming.

4 Summary and reflections

Table 3 summarises the most likely areas of conflict and synergies associated with different ways of increasing yields in organic agriculture identified in this review.

Table 3: Risks and opportunities in different areas associated with different strategies to increase yields in organic production

This review shows that in most areas, there are both risks and opportunities associated with strategies to increase yields in organic production. However, increased N inputs have many risks and few opportunities for synergies, whereas for reduced losses only opportunities, and no risks, were identified (Table 3). The final outcome depends largely on management, i.e. how strategies to increase yields are implemented and whether trade-offs are accounted for and managed.
Knowledge, skills and systems thinking are crucial in this endeavour, as we demonstrate with numerous examples. The ambition of organic farming to design high-yielding farming systems that also care for the environment, people and animals entails a difficult value-based balancing act. Although not discussed here, farming systems also need to be resilient and, inherently, resilient systems include redundancy, which might counteract resource efficiency (Bennett et al. 2014). In some respects, current organic agriculture delivers benefits compared with conventional agriculture (Seufert and Ramankutty 2017). We note in this review that, if strategies to increase yields are implemented, it cannot be assumed that simply following current EU regulations on organic agriculture will be sufficient to safeguard the advantages of organic farming, such as biodiversity conservation and lower nutrient losses per unit area. Strategies to counteract possible negative consequences of yield increases will be needed at farm level, and these are currently neither mandatory nor regulated, and seldom attractive to farmers. For example, when in-field diversity decreases due to improved weed management and crops become denser due to increased fertilisation, it will be important to implement strategies that promote biodiversity outside or adjacent to fields, in order to maintain biodiversity at landscape level. As for counteracting potential increases in nutrient losses from increased fertiliser use, management strategies such as precision application of fertilisers, the use of catch crops, timely tillage and optimal design of crop rotations and nutrient filters in the landscape will be needed. The implementation of such measures, and the application of nutrients at doses not exceeding optimal levels, have to be guaranteed in some way. One of the most important yield-limiting factors in current organic crop production, if not the most important, is the availability of plant-available nutrients.
At the same time, a crucial characteristic of sustainable food systems is the safe recycling of nutrients from society. According to its principles, organic agriculture should rely on local resources and recycling, so ideally organic agriculture should be the driving force for the implementation of circular food systems. Unfortunately, EU regulations hinder such development through a ban on returning human wastes to land, due to contamination risks from e.g. environmental pollutants and drug residues. However, in Sweden, it is possible to use certified digestate from biogas production based on food waste and slaughter waste in organic farming. This is attractive for biogas enterprises, as it is a way to increase the value of the digestate. Estimations by Spångberg (2014) show a large potential of urban wastes for nutrient supply in agriculture; the P content in total urban wastes in Sweden was about 80% of the total amount of mineral P used on agricultural land in Sweden in 2016 (SS 2017). The return of different kinds of urban wastes, e.g. human excreta, food waste and by-products from the food industry, to agricultural land, and the ability to overcome social and environmental barriers to this, need further development. We believe that organic agriculture could play an important role here; there are numerous technologies that can be applied to separate nutrients in human excreta from unwanted substances, enabling safe and trusted recycling of nutrients from food consumers back to agriculture (Bloem et al. 2017). Apart from recycling nutrients, 'new' nutrients will also be needed to compensate for inevitable losses from fields and from manure or other organic fertilisers. Currently, only legumes are allowed to provide 'new' nitrogen in organic agriculture. However, we would encourage the organic movement to also consider (after careful evaluation) the use of mineral nitrogen fertilisers made from renewable sources (Tallaksen et al. 2015), as we believe these can comply with organic principles and offer benefits in certain cropping systems, e.g. in horticulture systems with drip irrigation.

Organic principles stipulate good care of animals, and EU regulations reflect this with requirements for e.g. outdoor access, regulated slaughter ages and larger space allowances. However, current organic livestock production systems in Northern Europe commonly use high-yielding breeds, with their associated welfare problems, in systems managed according to organic regulations. The animals are not always adapted to these systems, which introduces additional health problems (although the systems allow for more natural behaviours). Slightly provocatively, one can say that such organic livestock production systems are trying to "have their cake and eat it". Although it is clearly good to increase animal welfare and health in current livestock systems (and hence also improve resource efficiency and yields at herd/flock level), we ask whether in the long term it is a cul-de-sac to work on closing the yield gap between organic and conventional systems that both use the same high-yielding breeds, often associated with health concerns. An alternative for the organic sector could be to implement truly alternative livestock systems, by introducing other breeds adapted to organic production conditions, e.g. more robust pig and poultry breeds better adapted to outdoor free-range rearing. Yields per animal in such systems would naturally be lower, and probably also total output at herd level, despite healthier animals. However, this could be balanced on the consumption side by dietary change through decreased livestock consumption and an increase in plant-based food, as has been identified as necessary to reach e.g. climate goals (Bajželj et al. 2014). Ultimately, feeding cereals and legumes to livestock represents a 'yield loss' in total human food calories produced.
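This 'yield loss' can be made concrete with a net human-edible protein calculation of the kind discussed in this review; all numbers in the sketch are hypothetical, chosen only to illustrate the bookkeeping:

```python
# Hypothetical illustration of net human-edible protein per hectare:
# protein delivered by the livestock system minus the human-edible
# protein (cereals, legumes) fed to the animals. All numbers invented.
systems = {
    # name: (animal protein produced, human-edible feed protein), kg/ha
    "high-yielding, concentrate-fed": (90, 120),
    "robust breed, forage-based": (45, 10),
}

for name, (produced, fed) in systems.items():
    net = produced - fed
    print(f"{name}: net human-edible protein {net:+d} kg/ha")
```

With these made-up figures, the concentrate-fed system is a net consumer of human-edible protein, while the lower-yielding forage-based system is a net producer; the ranking by yield per animal and by net food output can thus point in opposite directions.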
Comparison of farming systems in terms of the trade-offs between food production and environmental impact requires the use of relevant metrics, and it is worth considering whether the yield of human-edible energy or protein per hectare of land might be more relevant than yield per animal (van Zanten et al. 2016). How organic livestock production systems develop depends strongly on what consumers are willing to pay for and what policymakers are willing to support. However, based on the increased interest in sustainable foods and the flexitarian/vegetarian trend among young consumers (Mintel 2017), partly as a reaction to 'industrialised' livestock systems, we suggest that lower-yielding but more animal-friendly organic livestock systems are likely to be more acceptable than organic systems that mimic conventional systems. Interestingly, research has shown that European consumers who buy organic foods consume more fruit, vegetables, whole grains and legumes and less red and processed meat than other consumers (Kesse-Guyot et al. 2013; Bradbury et al. 2014; Eisinger-Watzl et al. 2015). Another means to reduce the need for more food is the reduction of food waste during production and processing, and by consumers (Priefer et al. 2016). This review took its starting point in the need to raise yields in organic production. Naturally, much can be gained from better management on farms that substantially underperform in comparison with top-performing farms under the same conditions. However, one can argue that current yields on the best-performing organic farms in Northern Europe are (at least close to) 'high enough', taking into account the other outcomes from organic production (e.g. generally enhanced biodiversity, greater opportunities for animals to express natural behaviours and better profitability for farmers) and that there are greater opportunities to raise yields in areas of Africa and Asia with considerable yield gaps.
Proponents of this reasoning might question the interpretation of the FAO projection of future food demand as a need to increase food production by 60% by 2050 (Alexandratos and Bruinsma 2012). On the other hand, if dietary and waste patterns are considered difficult to change, as research and practice on the promotion of healthier diets for e.g. weight reduction have shown (Douketis et al. 2005), maximising yields on all land is critically important to avoid expansion of agricultural land and associated loss of natural habitats. In any case, if organic farming systems are to deliver substantial amounts of food to future food systems and at the same time deliver multiple other benefits, as is the ambition according to the organic principles and as increasingly expected by consumers, the organic sector and its producers, breeding companies, advisory services, farmer associations and public policy all need to focus on a broad set of goals that complement those of crop yield per hectare and yield per animal. With this review, we show that strategies to increase yields in organic agriculture can bring several synergies, but there are also apparent risks that need to be recognised and managed.

Notes

Basic substances are a group of compounds of low concern that are not primarily designed as plant protection products, but may nonetheless be useful in plant protection. Basic substances have generally been used for a long time in other areas, with exposures to humans and the environment. Of the current 15 EU-approved basic substances, 10 meet the definition of 'foodstuff' and are of animal or plant origin, e.g. whey and Urtica spp. extracts, and are therefore approved in organic agriculture (EC 2008).
References

EC (2008) Commission Regulation (EC) No 889/2008 of 5 September 2008 laying down detailed rules for the implementation of Council Regulation (EC) No 834/2007 on organic production and labelling of organic products with regard to organic production, labelling and control.

Hoffmann R, Wivstad M (2015) Why do (don't) we buy organic food and do we get what we bargain for? EPOK – Centre for Organic Food and Farming, Swedish University of Agricultural Sciences, Uppsala. ISBN 978-91-576-9285-6.

IFOAM (2015) Organic 3.0 for truly sustainable farming and consumption. Discussion paper by Markus Arbenz, David Gould and Christopher Stopes, based on think tanking by SOAAN and IFOAM – Organics International, launched at the ISOFAR International Organic EXPO 2015, Goesan County.

Leenstra F, Maurer V, Galea F et al (2014) Laying hen performance in different production systems; why do they differ and how to close the gap? Results of discussions with groups of farmers in The Netherlands, Switzerland and France, benchmarking and model calculations. Eur Poult Sci 78. https://doi.org/10.1399/eps.2014.53

VKM (2014) Part II: Animal health and welfare in Norway. Comparison of organic and conventional food and food production (Opinion of the Panel on Animal Health and Welfare and the Steering Committee of the Norwegian Scientific Committee for Food Safety). Vol. 11-007-2-Final, Oslo.
(Ponisio et al. 2015) of conventional yields. Yield differences vary considerably with growing conditions, management practices and crop types, with legumes showing a considerably smaller yield gap than cereals or tubers. Based on 34 studies from Sweden, Finland and Norway, de Ponti et al. (2012) found that organic yields in this region are 70% of conventional yields. Yield statistics for Sweden from 2015 show that organic cereal yields in that year ranged between 53% (winter rye and winter wheat) and 58% (spring wheat) of conventional yields. Organic leguminous crops yielded 69% (peas) and 87% (field beans) of conventional crop yields and organic leys 87% of conventional yields. These values represent national averages for organic and conventional production, but geographical bias, which is present because there are more organic farms in regions less favourable for cropping, is not accounted for (SS 2016). Supply of nitrogen (N) and control of perennial weeds are two of the most important yield-limiting factors in organic crop production (Askegaard et al. 2011). These are linked, as sufficient N availability for rapid early establishment and growth of crops also has a strong influence on reducing weed infestation, by greater weed suppression ability of the crop (Olesen et al. 2007). Commonly used organic fertilisers such as manure, compost, green manure and organic wastes are low in plant-available N and this, in combination with slow N mineralisation in the spring due to low temperatures, restricts yields in organic crops, especially in the Nordic countries (Dahlin et al. 2005). Yield losses due to pests and disease also affect the organic-conventional yield gap.
yes
Organic Farming
Are yields from organic farming lower than those from conventional farming?
yes_statement
"yields" from "organic" "farming" are "lower" than those from "conventional" "farming".. "organic" "farming" produces "lower" "yields" compared to "conventional" "farming".
https://news.cornell.edu/stories/2005/07/organic-farms-produce-same-yields-conventional-farms
Organic farms produce same yields as conventional farms | Cornell ...
Organic farms produce same yields as conventional farms

Organic farming produces the same yields of corn and soybeans as does conventional farming, but uses 30 percent less energy, less water and no pesticides, a review of a 22-year farming trial study concludes. David Pimentel, a Cornell University professor of ecology and agriculture, concludes, "Organic farming offers real advantages for such crops as corn and soybeans." Pimentel is the lead author of a study that is published in the July issue of Bioscience (Vol. 55:7) analyzing the environmental, energy and economic costs and benefits of growing soybeans and corn organically versus conventionally. The study is a review of the Rodale Institute Farming Systems Trial, the longest running comparison of organic vs. conventional farming in the United States. "Organic farming approaches for these crops not only use an average of 30 percent less fossil energy but also conserve more water in the soil, induce less erosion, maintain soil quality and conserve more biological resources than conventional farming does," Pimentel added. The study compared a conventional farm that used recommended fertilizer and pesticide applications with an organic animal-based farm (where manure was applied) and an organic legume-based farm (that used a three-year rotation of hairy vetch/corn and rye/soybeans and wheat). The two organic systems received no chemical fertilizers or pesticides. Inter-institutional collaboration included Rodale Institute agronomists Paul Hepperly and Rita Seidel, U.S. Department of Agriculture's Agricultural Research Service research microbiologist David Douds Jr. and University of Maryland agricultural economist James Hanson. The research compared soil fungi activity, crop yields, energy efficiency, costs, organic matter changes over time, nitrogen accumulation and nitrate leaching across organic and conventional agricultural systems.
"First and foremost, we found that corn and soybean yields were the same across the three systems," said Pimentel, who noted that although organic corn yields were about one-third lower during the first four years of the study, over time the organic systems produced higher yields, especially under drought conditions. The reason was that wind and water erosion degraded the soil on the conventional farm while the soil on the organic farms steadily improved in organic matter, moisture, microbial activity and other soil quality indicators. The fact that organic agriculture systems also absorb and retain significant amounts of carbon in the soil has implications for global warming, Pimentel said, pointing out that soil carbon in the organic systems increased by 15 to 28 percent, the equivalent of taking about 3,500 pounds of carbon dioxide per hectare out of the air. Among the study's other findings: In the drought years, 1988 to 1998, corn yields in the legume-based system were 22 percent higher than yields in the conventional system. The soil nitrogen levels in the organic farming systems increased 8 to 15 percent. Nitrate leaching was about equivalent in the organic and conventional farming systems. Organic farming reduced local and regional groundwater pollution by not applying agricultural chemicals. Pimentel noted that although cash crops cannot be grown as frequently over time on organic farms because of the dependence on cultural practices to supply nutrients and control pests and because labor costs average about 15 percent higher in organic farming systems, the higher prices that organic foods command in the marketplace still make the net economic return per acre either equal to or higher than that of conventionally produced crops. 
Organic farming can compete effectively in growing corn, soybeans, wheat, barley and other grains, Pimentel said, but it might not be as favorable for growing such crops as grapes, apples, cherries and potatoes, which have greater pest problems. The study was funded by the Rodale Institute and included a review of current literature on organic and conventional agriculture comparisons. According to Pimentel, dozens of scientific papers reporting on research from the Rodale Institute Farming Systems Trial have been published in prestigious refereed journals over the past 20 years.
no
https://www.sciencedirect.com/science/article/pii/S0167880917305595
Crop yield gap and stability in organic and conventional farming ...
Highlights

• We used data from a 13-year-old farming systems comparison in the Netherlands.
• The yield gap between organic and conventional farming diminished over time.
• This coincided with higher nutrient use efficiency and spatial stability in the organic system.
• Transition from conventional to organic results in fundamental changes in soil properties.

Abstract

A key challenge for sustainable intensification of agriculture is to produce increasing amounts of food and feed with minimal biodiversity loss, nutrient leaching, and greenhouse gas emissions. Organic farming is considered more sustainable but less productive than conventional farming. We analysed results from an experiment started under identical soil conditions comparing one organic and two conventional farming systems. Initially, yields in the organic farming system were lower, but approached those of both conventional systems after 10–13 years, while requiring lower nitrogen inputs. Unexpectedly, organic farming resulted in a lower coefficient of variation, indicating enhanced spatial stability, of pH, nutrient mineralization, nutrient availability, and abundance of soil biota. Organic farming also resulted in improved soil structure with higher organic matter concentrations and higher soil aggregation, a profound reduction in groundwater nitrate concentrations, and fewer plant-parasitic nematodes. Temporal stability between the three farming systems was similar, but when excluding years of Phytophthora outbreaks in potato, temporal stability was higher in the organic farming system. There are two non-mutually exclusive mechanistic explanations for these results. First, the enhanced spatial stability in the organic farming system could result from changes in resource-based (i.e. bottom-up) processes, which coincides with the observed higher nutrient provisioning throughout the season in soils with more organic matter. Second, enhanced resource inputs may also affect stability via increased predator-based (i.e. top-down) control. According to this explanation, predators stabilize population dynamics of soil organisms, which is supported by the observed higher soil food web biomass in the organic farming system. We conclude that closure of the yield gap between organic and conventional farming can be a matter of time and that organic farming may result in greater spatial stability of soil biotic and abiotic properties and soil processes. This is likely due to the time required to fundamentally alter soil properties.
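The spatial-stability measure used in the abstract, the coefficient of variation (standard deviation divided by the mean across field plots; lower means more spatially stable), can be illustrated as follows. The per-plot pH values are hypothetical:

```python
# Coefficient of variation (CV) as a spatial-stability measure.
# A lower CV across plots indicates a more spatially stable property.
from statistics import mean, pstdev

def cv(values):
    """CV = population standard deviation / mean."""
    return pstdev(values) / mean(values)

organic_ph = [6.1, 6.2, 6.0, 6.1]       # hypothetical per-plot pH values
conventional_ph = [5.6, 6.4, 5.9, 6.7]  # hypothetical, more variable

print(f"organic CV:      {cv(organic_ph):.3f}")
print(f"conventional CV: {cv(conventional_ph):.3f}")
```

With these invented values the organic plots show the lower CV, mirroring the direction of the result reported in the study.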
no
https://news.mongabay.com/2011/09/organic-farming-can-be-more-profitable-in-the-long-term-than-conventional-agriculture/
Organic farming can be more profitable in the long-term than ...
Organic farming can be more profitable in the long-term than conventional agriculture

Organic farming is more profitable and economically secure than conventional farming even over the long-term, according to a new study in Agronomy Journal. Using experimental farm plots, researchers with the University of Minnesota found that organic beat conventional even if organic price premiums (i.e. customers willing to pay more for organic) were to drop as much as 50 percent. "Doing an economic study like this, it's important to get as complete a picture of the yield variability as we can," explains Timothy Delbridge, lead author of the study and a doctoral student studying agricultural economics at the University of Minnesota. "So, the length of this trial is a big asset. We're pretty confident that the full extent of the yield variability came through in the results." Conducted over 18 years, the study found that a conventional farm, rotating corn, soy, oat, and alfalfa over 4 years brought in $273, while an organic farm netted $538. Even if the organic premium dropped by half, it would still be more profitable given that the cost of production was lower for organic, since organic farmers would spend nothing on chemicals. "What we're looking at here are results between an established organic and an established conventional system. This research doesn't take into consideration the issue of the transition itself: how difficult or costly that may be," cautions Delbridge. Organic farming—which excludes the use of pesticides, herbicides, and GMOs—is considered better for environment, including less pollution, better use of water, and biodiversity-friendly practices. Findings vary, but studies have shown that organic farming is capable of producing similar yields to conventional farming. Organic farms also withstand natural disasters—such as droughts and hurricanes—better than conventional farming, which may be increasingly important in a world undergoing climate change.
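The 'halved premium' claim can be sketched with back-of-the-envelope arithmetic. Only the net returns ($273 and $538) come from the article; the organic gross revenue and the roughly 30% premium below are assumed figures used purely for illustration:

```python
# Back-of-the-envelope sketch of the "halved premium" argument.
# Only conventional_net and organic_net are from the article; the gross
# revenue and premium are hypothetical.
conventional_net = 273
organic_net = 538
organic_gross = 900      # assumed gross revenue at the full premium
premium = 0.30           # assumed organic price premium

base_revenue = organic_gross / (1 + premium)       # revenue with no premium
gross_at_half = base_revenue * (1 + premium / 2)   # revenue at half premium
net_at_half = organic_net - (organic_gross - gross_at_half)

print(f"organic net return at half premium: ${net_at_half:.0f}")
print("still above conventional:", net_at_half > conventional_net)
```

Even under these assumptions, halving the premium trims the organic net return but leaves it well above the conventional figure, which is the shape of the result the study reports.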
(11/08/2010) Strawberry plants grown on commercial organic farms yield higher-quality fruit and have healthier soil than those grown conventionally, according to a study published on 1 September in the journal PLoS One. The research suggests that sustainable farming practices can produce nutritious fruit, if farmers manage soil and its beneficial microbes properly. This is among the most comprehensive studies to investigate how conventional and organic farming methods affect both fruit and soil quality. (09/07/2010) Forest carbon payment programs like the proposed reducing emissions from deforestation and degradation (REDD) mechanism could put pressure on wildlife-friendly farming techniques by increasing the need to intensify agricultural production, warns a paper published this June in Conservation Biology. The paper, written by Jaboury Ghazoul and Lian Pin Koh of ETH Zurich and myself in September 2009, posits that by increasing the opportunity cost of conversion of forest land for agriculture, REDD will potentially constrain the amount of land available to meet growing demand for food. Because organic agriculture and other biodiversity-friendly farming practices generally have lower yields than industrial agriculture, REDD will therefore encourage a shift away from these practices toward more productive forms of food production. (02/05/2009) Embracing more sustainable farming methods is the only way for the world’s farmers to grow enough food to meet the demands of a growing population and respond to climate change, the top crop expert with the United Nations Food and Agriculture Organization (FAO) said today.
no
Organic Farming
Are yields from organic farming lower than those from conventional farming?
no_statement
"yields" from "organic" "farming" are not "lower" than those from "conventional" "farming".. "organic" "farming" does not result in "lower" "yields" compared to "conventional" "farming".
https://organicinsider.com/newsletter/organic-farming-impact-on-the-environment-your-weekly-organic-insider/
Why Claims That Organic is Worse for the Environment Do Not Hold ...
Why Claims That Organic is Worse for the Environment Do Not Hold Up (Today’s commentary is written by Stephanie Strom, who grew to know and love the organic industry during her six-plus year tenure as the food business reporter at The New York Times.) Dominating the headlines recently has been a study out of the UK which claims that organic farming is bad for the environment. Not exactly. The report, which assesses the potential changes to net greenhouse gas (GHG) emissions if England and Wales shifted to 100% organic food production, clearly acknowledges that organic farming might contribute to a reduction in GHG emissions “through decreased use of farm inputs and increased soil carbon sequestration.” Nonetheless, the authors contend that organic’s positive environmental impact “must be set against the need for increased production and associated land conversion elsewhere as a result of lower crop and livestock yields under organic methods.” The crux of their argument is that organic will result in yields 40% lower when compared to conventional farming. Even if you trust the data collected on agricultural yields, which many scientists admit are less than perfect, relying on them to make a case that modern conventional farming and animal husbandry are better for the planet than traditional organic is a mistake. “If you look at the last ten years, organic yields have been skyrocketing as research on organic crops has increased,” said Dr. Jessica Shade, director of science programs at The Organic Center. “In many crops, we’re getting to where the yield gap is small or doesn’t exist.” Dr. Shade pointed out that while the amount of money spent on research into organic crops has climbed, it still is a fraction of the many millions of dollars invested by the government and private industry to improve conventional crop yields. A critical factor, which the study fails to acknowledge, is how yields will change over time. 
“The British study makes an assumption that yields in organic production will underperform conventional by 40% forever, but that’s not true,” said Dr. Yichao Rui, soil scientist at the Rodale Institute. “If you keep depleting the soil and its microbiome, you won’t sustain those conventional yields over time, and the soil will be less and less resilient, which is not good for future climate change scenarios.” Rodale Institute, which has decades of experience running its own test fields, has found no significant difference in yields of conventional and organic small grains, such as wheat. It also has found organic and conventional potato yields to be virtually the same. Historically, the difference between organic and conventional yields has been set at about 20%, a ballpark figure confirmed by an authoritative meta-analysis of 115 studies comparing organic and conventional yields by the University of California Berkeley in 2015. The Berkeley study also dug deeper and concluded that on older organic farms, where organic practices like cover cropping and crop rotation have had time to work their magic, the yield gap shrank below 10%. After years of investment in the soil that encourages nutrient retention and nurtures the fungi, microbes and bacteria to interact with plant roots in beneficial ways, organic farm yields can easily rival those of conventional farms, with less water, less energy and the use of natural pesticides. One final component that the study does not adequately consider is the impact that toxic pesticides have on soil health, which directly influences plant yields. 
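The land-use dispute above comes down to the assumed size of the yield gap. A small sketch (illustrative numbers, not from any of the cited studies) shows how the cropland needed to meet a fixed demand scales with that assumption:

```python
def land_needed(demand, conventional_yield, yield_gap):
    """Cropland needed if organic yields trail conventional by `yield_gap`
    (0.40 = the UK study's assumption; 0.20 = historical ballpark;
    0.10 = mature organic farms per the Berkeley meta-analysis)."""
    return demand / (conventional_yield * (1.0 - yield_gap))

demand, conv_yield = 1000.0, 8.0  # arbitrary illustrative units
for gap in (0.40, 0.20, 0.10):
    print(f"gap {gap:.0%}: {land_needed(demand, conv_yield, gap):.1f} units of land")
```

Under these toy numbers, moving from a 40% gap to a 10% gap cuts the extra land requirement sharply, which is why the assumed gap drives the study's conclusion.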
no
Organic Farming
Are yields from organic farming lower than those from conventional farming?
no_statement
"yields" from "organic" "farming" are not "lower" than those from "conventional" "farming".. "organic" "farming" does not result in "lower" "yields" compared to "conventional" "farming".
https://www.nrdc.org/bio/lena-brook/organic-agriculture-helps-solve-climate-change
Organic Agriculture Helps Solve Climate Change
Organic Agriculture Helps Solve Climate Change As farmers grapple with everything from extreme weather events to heat stress to wildfires, and agriculture becomes less predictable in the face of a changing climate, it is essential for governments to help farmers transition to practices that increase resilience and dramatically decrease reliance on fossil-fuel based chemicals. June 9, 2022 New and beginning farmers on the 100-acre Agricultural Land Based Training Association (ALBA) organic farm in Salinas, CA For the past year, the California Air Resources Board (CARB) has been developing its 2022 Draft Climate Change Scoping Plan, intended to carve out a path to carbon neutrality for California by mid-century. After months of advocacy from NRDC and its allies calling on CARB to include incentives for organic farming as well as pesticide use reduction in the Natural Working Lands section, the agency released its proposed approach in May. In this draft, the agency recommends converting 20% of California’s agricultural lands to organic agriculture by 2045 as a way to mitigate climate change. While this recommendation is not nearly ambitious enough (California’s organic acreage grew by 44% from 2014 to 2019 according to a report from the state’s Department of Agriculture), it is nonetheless an important milestone because it recognizes and affirms the essential role that organic farming systems can play in climate-smart agriculture. Organic agriculture is an important lever in moving the needle on climate change. Here’s why: Organic Farming Reduces Greenhouse Gases Because fossil fuel-based fertilizers and most synthetic pesticides are prohibited in organic farming, it has a significantly lower carbon footprint. The production of these farm chemicals is energy intensive. Studies show that the elimination of synthetic nitrogen fertilizers alone, as is required in organic systems, could lower direct global agricultural greenhouse gas emissions by about 20%. 
A forty-year study conducted by the Rodale Institute also showed that organic farms use 45% less energy compared to conventional farms (while maintaining or even exceeding yields after a 5-year transition period). Meanwhile, fumigant pesticides - commonly used on crops like strawberries and injected into soil - emit nitrous oxide (N2O), one of the most potent greenhouse gases. Research indicates that one commonly used fumigant pesticide, chloropicrin, can increase N2O emissions by 700-800%. Two other fumigants (metam sodium and dazomet) are also known to significantly increase N2O output. Soil-boosting practices that are the foundation of organic agriculture also help sequester more carbon in soil compared to non-organic systems. Multiple meta-analyses comparing thousands of farms nationwide have shown that organic agriculture results in higher stable soil organic carbon and reduced nitrous oxide (N2O) emissions when compared to conventional farming. A recent review of almost 400 studies showed pesticide use was associated with damage to soil invertebrates in more than 70% of the studies. Soil invertebrates are critical to carbon sequestration, because they are responsible for the formation of soil components that are essential to building soil organic carbon. In fact, estimates indicate that with worldwide adoption of agroecological best management practices like diversified organic farming, soils could actually absorb more carbon than the farming sector emits between 2020 and 2100. Organic Farming Increases Resilience A welcome sign for Huerta del Valle (HdV), a 4-acre organic farm in a low-income urban community in Ontario, CA that faces severe drought. HdV grows over 100 different crops. Credit: USDA Photo by Lance Cheung Organic farms are required to build healthy soil and crops that make them better able to adapt in a changing climate. 
First and foremost, organic farmers rely on composting, crop rotation, and natural rather than fossil fuel-based inputs in order to maintain or improve soil health. As stewards of healthy soil, organic farmers and ranchers can be a major force for climate mitigation (U.S. Department of Agriculture Secretary Vilsack confirmed as much during the recent announcement of the new USDA framework for resilient food and farming systems). Organic farming promotes resiliency by boosting soil’s ability to retain water and the natural nutrients found in healthy soils. By increasing organic matter in soil continuously over time, organic agriculture improves water percolation by 15-20%, replenishing groundwater and helping crops perform well in extreme weather like drought and flooding. A decades-long organic farming trial found that organic yields can be up to 40% higher than nonorganic farms in drought years. By foregoing most fossil fuel-based inputs, organic farmers are also more resilient and adaptable not only to stressors related to climate change but also other disruptive global stressors. As farmers grapple with everything from extreme weather events to heat stress and wildfires, and agriculture becomes even less predictable in the face of a changing climate, it is essential for governments to help farmers transition to practices that increase resilience and dramatically decrease reliance on fossil-fuel based chemicals. Setting ambitious goals—as the European Union has done with its 2020 Farm to Fork Strategy—is a critical first step. The California Air Resources Board has moved in the right direction by recognizing that organic agriculture can play an important role in our state’s climate plan. However, CARB ought to stretch its ambitions as it develops its final plan to maximize the climate potential of California’s organic agriculture sector. 
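A note on reading the fumigant figures above: an "increase of 700-800%" means N2O emissions reach eight to nine times the baseline, not seven to eight times (assuming, as the phrasing implies, that the percentage is relative to untreated soil). A one-line conversion, purely illustrative:

```python
def emissions_multiplier(percent_increase):
    """Convert a reported percent increase into a multiple of baseline:
    +700% -> 8x, +800% -> 9x."""
    return 1.0 + percent_increase / 100.0

print(emissions_multiplier(700), emissions_multiplier(800))  # 8.0 9.0
```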
no
Cryptocurrency
Can Bitcoin and other cryptocurrencies be manipulated easily?
yes_statement
"bitcoin" and other "cryptocurrencies" can be "easily" "manipulated".. manipulating "bitcoin" and other "cryptocurrencies" is easy.
https://jfin-swufe.springeropen.com/articles/10.1186/s40854-022-00364-3
Manipulation of the Bitcoin market: an agent-based study | Financial ...
Abstract Fraudulent actions of a trader or a group of traders can cause substantial disturbance to the market, both by directly influencing the price of an asset and by indirectly misinforming other market participants. Such behavior can be a source of systemic risk and of increasing distrust among market participants, consequences that call for viable countermeasures. Building on the foundations provided by the extant literature, this study aims to design an agent-based market model capable of reproducing the behavior of the Bitcoin market during the time of an alleged Bitcoin price manipulation that occurred between 2017 and early 2018. The model includes the mechanisms of a limit order book market and several agents associated with different trading strategies, including a fraudulent agent, initialized from empirical data, who performs market manipulation. The model is validated with respect to the Bitcoin price as well as the amount of Bitcoins obtained by the fraudulent agent and the traded volume. Simulation results provide a satisfactory fit to historical data. Several price dips and volume anomalies are explained by the actions of the fraudulent trader, completing the known body of evidence extracted from blockchain activity. The model suggests that the presence of the fraudulent agent was essential to reproducing the Bitcoin price development in the given time period; without this agent, it would have been very unlikely that the price would have reached the heights it did in late 2017. The insights gained from the model, especially the connection between liquidity and manipulation efficiency, open a discussion on how to prevent illicit behavior. Introduction Cryptocurrencies are a digital alternative to legal fiat money. Rather than being issued by competent governmental authorities, they are implemented using principles of cryptography to validate all transactions and generate new currency. 
Every transaction that occurs is recorded in a public ledger. The blockchain, and more generally distributed ledgers, facilitate innovation in multiple domains of activity. These include, but are not limited to, supply chain management, data sharing, accounting, e-voting, or, as the most prominent area, finance [see, e.g., the overview in Casino et al. (2019)]. While it is indisputable that the blockchain by itself had and has a great influence on public discourse, with innovation potential comparable to that of the Internet (as it fosters a decentralized infrastructure for economic transactions), financial experts remain generally skeptical. The implementation and the characteristics (including the strictly technological ones) of blockchain technology, when proposed as a replacement for standard fiat currency, are subject to ongoing discussion (Berentsen and Schär 2018; Dierksmeier and Seele 2018; Ertz and Boily 2019; Glaser and Bezzenberger 2015). A major problem surrounding cryptocurrencies—but also, one of the reasons why they have become well known to the general public—are the heavy tails of their return distribution (Chan et al. 2017) and their volatility (Bariviera 2017), resulting in a rich history of “bubbles” (Gerlach et al. 2018). Although the innovative potential of distributed ledger technologies is vast, the innovation itself does not necessarily translate into trust (see, e.g., Bodó 2021). Traditional markets and exchanges were fairly successful in establishing a trustworthy environment via governmental or international institutions, robust legislative activity, market regulations, and effective monitoring/oversight systems. This development took many decades after a long history of market abuse (Putniņš 2012), and remains an area of active research. It can be said that each new case of market abuse brought a better understanding of market vulnerabilities and often led to viable countermeasures. 
Furthermore, every new technology potentially brings new techniques for committing fraud. Now, cryptocurrencies, crypto-assets, and various forms of blockchain services are still in their infancy. Therefore, new methods need to be invented or reinvented for this new medium to establish a reliable and fair market environment, ideally while maintaining the decentralized and (semi)anonymous nature of the underlying blockchain technology. With this motivation, we focus in this study on one example where the cryptocurrency market was supposedly manipulated via fraudulent actions of one market participant. A data-driven model is developed and validated using historical data. The behavior of the fraudulent entity is investigated in detail and included in the model. Toward the end, we conclude our investigations with a discussion on how our findings can be applied to improve trust by reducing the present vulnerabilities of crypto-markets. In the remainder of this section, we provide a brief overview of research on fraud in cryptocurrencies and on agent-based modeling (especially in the context of crypto markets), and we then highlight the specific contributions of this paper. Fraud and cryptocurrencies Several illicit activities are related to cryptocurrencies, such as black-market trading (Foley et al. 2019), money laundering, and terrorist financing (Fletcher et al. 2021). In our case, we focus on fraud that targets and disrupts the market. A common form of fraud in crypto markets is wash trading (Cong et al. 2020; Victor and Weintraud 2021). The principle of wash trading is to execute trades where the buyer and seller are the same entity. Thus, false impressions of highly traded assets are created to mislead investors. Another, more serious, form of fraud observed in crypto markets is pump-and-dump schemes (Kamps and Kleinberg 2018), which typically take the form of coordinated actions to increase the market price in a short time period (Hamrick et al. 
2019; Li et al. 2018). In the literature, we find various studies that attempt to explain price as a direct consequence of manipulative behavior. A study (Gandal et al. 2018) analyzing suspicious market practices on the Mt. Gox exchange concludes that fraudulent actions influenced the price growth from $150 to $1000 in late 2013. More recently, Griffin and Shams (2019) argue that the Bitcoin market price might have been inflated by the issuance of Tether. As observed in a 2014 study (Robleh et al. 2014), Bitcoin and other cryptocurrencies served as a medium of exchange for a relatively small number of people; therefore, they posed no serious material risk to monetary and financial stability, but today investors increasingly involve crypto-assets in their portfolios, and some large companies or payment services are already accepting payments in Bitcoin. This means that cryptocurrency volatility can potentially be a new source of systemic risk to the entire economy and financial sector. Recent studies have approached risk using methods such as clustering (e.g., Li et al. 2021), multi-objective feature selection (e.g., Kou et al. 2021), or network analysis (e.g., Anagnostou et al. 2018). Turning to the source of systemic risk originating in illicit behavioral schemes: although advances in the detection of wash trading (Victor and Weintraud 2021) and pump-and-dump schemes (Chen et al. 2019) are already taking place, new models are needed that can explain, simulate, or possibly predict the effects of fraudulent behavior, and that can serve as a testbed for testing the effectiveness of policies, regulations, or monitoring enforcement mechanisms. 
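The wash-trading scheme described above, trades whose buyer and seller are the same entity, can be sketched as a naive screen over a trade list. The field names and single-account view are illustrative assumptions; published detectors such as Victor and Weintraud (2021) work on real exchange and blockchain data and must also catch trades routed through related accounts:

```python
from collections import defaultdict

def flag_wash_trades(trades):
    """Return self-trades (buyer == seller) and per-account self-traded
    volume -- the volume a wash trader adds without real price discovery."""
    flagged, self_volume = [], defaultdict(float)
    for trade in trades:
        if trade["buyer"] == trade["seller"]:
            flagged.append(trade)
            self_volume[trade["buyer"]] += trade["amount"]
    return flagged, dict(self_volume)

trades = [
    {"buyer": "A", "seller": "B", "amount": 1.0},   # genuine trade
    {"buyer": "C", "seller": "C", "amount": 5.0},   # wash trade
    {"buyer": "C", "seller": "C", "amount": 2.0},   # wash trade
]
flagged, volume = flag_wash_trades(trades)
print(len(flagged), volume)  # 2 {'C': 7.0}
```

Here account C inflates apparent volume by 7.0 units without any change of ownership, which is exactly the false impression the text describes.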
One way to satisfy this demand is to consider models that combine qualitative and quantitative knowledge, which can be designed with a strong reliance on empirical data and can simulate various scenarios to address questions regarding the effectiveness of regulatory interventions in the crypto market, as discussed in Shanaev et al. (2020). Agent-based modelling Agent-based models generally aim to explain some complex phenomena, where the emergent behavior at the macro-level is hypothesized to be a consequence of behavioral rules at the micro-level. For a historical review, we refer to Chen (2012). In recent years, this modeling paradigm has been enhanced by more modern data-driven approaches, where behavioral data specific to each agent are used to construct, initialize, or estimate the parameters of a model of each agent’s decision mechanism. Only a relatively small number of parameters are left to be calibrated for the aggregated data, which increases the model’s validity and credibility. With this approach, even large-scale models are capable of rivaling the predictive power of traditional quantitative methods, for example, in the area of economic research (Poledna et al. 2019). These models can be particularly instrumental if the parameters of individual agents are of vital importance, for example, to test interventions during the COVID-19 pandemic (Kerr et al. 2021). In the literature, several examples of agent-based models can be found that have been created to gain insights into crypto markets. Most of these models are based on various financial or behavioral assumptions. To the best of our knowledge, the first study in this area is Luther (2013), where agents are put into a currency market with switching costs and network effects to investigate the widespread acceptance of cryptocurrency. A similar question was studied by Bornholdt and Sneppen (2014). An implicit assumption of demand was made in Cocco et al. 
(2017), a model enhanced by speculative traders and restricted by finite resources for each agent; it is the earliest example of a limit order book-based model of the Bitcoin market attempting to explain the price increase from the start of 2012 to April 2014. This model was later extended by mining (Cocco and Marchesi 2016) and evolutionary computation (Cocco et al. 2019). Other order book models are presented in Pyromallis and Szabo (2019) and Zhou et al. (2017), where the focus is mainly on the adaptive behavior of traders. In Lee et al. (2018), a combination of inverse reinforcement learning directly from Bitcoin blockchain data and order book agent-based modeling was used to make short-term predictions of the market price. Recently, models focusing on policy recommendations have also been developed. Shibano et al. (2020) introduces a price stabilization agent to reduce volatility, and Bartolucci et al. (2020) investigates a design extension of the Bitcoin blockchain to increase transaction efficiency. A strong aspect of agent-based models is that they provide an experimental environment for policymakers. Once a behavioral schema is identified and methods to measure and assess its consequences are settled, the simulated environment can be utilized to test the effectiveness of a set of alternative policies, given some adaptation rate, monitoring, and enforcement, and to identify the best one. In a recent review (Lopez-Rojas and Axelsson 2016), agent-based models are considered a tool for generating synthetic data for machine learning models, which can be used, for example, to complement more traditional evaluation methods (Kou et al. 2014). Most notably, agent-based models were developed in the area of urban crime modeling (Groff et al. 2019) or to study the behavioral aspects of tax evasion (Pickhardt and Prinz 2014). 
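Most of the models surveyed above share a limit-order-book core. A toy price-time-priority matcher, a minimal sketch rather than any cited model's implementation, shows the mechanism agents interact through:

```python
import heapq

class ToyOrderBook:
    """Minimal price-time-priority limit order book (illustrative sketch)."""

    def __init__(self):
        self.bids = []      # max-heap via negated price: (-price, seq, qty)
        self.asks = []      # min-heap: (price, seq, qty)
        self.trades = []    # executed (price, qty) pairs
        self._seq = 0       # time-priority tiebreaker

    def submit(self, side, price, qty):
        self._seq += 1
        if side == "buy":
            # lift asks while the best ask is at or below the limit price
            while qty > 0 and self.asks and self.asks[0][0] <= price:
                ask_price, seq, ask_qty = heapq.heappop(self.asks)
                traded = min(qty, ask_qty)
                self.trades.append((ask_price, traded))
                qty -= traded
                if ask_qty > traded:
                    heapq.heappush(self.asks, (ask_price, seq, ask_qty - traded))
            if qty > 0:  # remainder rests in the book
                heapq.heappush(self.bids, (-price, self._seq, qty))
        else:  # sell
            while qty > 0 and self.bids and -self.bids[0][0] >= price:
                neg_bid, seq, bid_qty = heapq.heappop(self.bids)
                traded = min(qty, bid_qty)
                self.trades.append((-neg_bid, traded))
                qty -= traded
                if bid_qty > traded:
                    heapq.heappush(self.bids, (neg_bid, seq, bid_qty - traded))
            if qty > 0:
                heapq.heappush(self.asks, (price, self._seq, qty))

book = ToyOrderBook()
book.submit("sell", 101.0, 2)   # rests in the ask queue
book.submit("sell", 100.0, 1)
book.submit("buy", 101.0, 2)    # lifts 1 @ 100, then 1 @ 101
print(book.trades)              # [(100.0, 1), (101.0, 1)]
```

A fraudulent or speculative agent in such models is simply another caller of `submit`, which is why order placement strategies translate directly into price impact.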
In principle, these models are not limited only to observed fraudulent behavior: they can extend the design of fraud-committing agents by considering different market manipulation schemes and measuring and assessing their consequences. By choosing a suitable representation of the fraud schema, it is possible to find more sophisticated patterns of reasoning for a fraudulent agent [e.g., by applying algorithmic evolutionary methods (Hemberg et al. 2016)]. Contributions Most studies focus on analyzing the statistical relationship between price and a set of exogenous variables. Conversely, in this study, we focus on the qualitative explanation dimension. Our approach builds on the qualitative findings in Griffin and Shams (2019), but, in contrast to that study, we construct a data-driven model, focusing mainly on the causal influence of the fraudulent behavior that supposedly inflated the Bitcoin price. This methodological innovation can be regarded as the main contribution of this study, along with the conceptualization of a specific fraud schema as an algorithm that can be executed by an agent in a simulated cryptocurrency market. Note that this approach opens the door to a broader view on the role of the fraudulent trader in the Bitcoin market, thus allowing us to analyze the situation from various points of view. For instance, as our market model is capable of generating market data such as the market price, the market volume or the Bitcoin inflow of the fraudulent trader, it is possible to compare these quantities to empirical data. In particular, we discover that certain anomalies in market volume or dips in market price can be attributed to the actions of a fraudulent trader, an experimental conclusion that completes the evidence presented in Griffin and Shams (2019). Furthermore, the model developed in this study allows us to investigate specific reasons behind the success of the market manipulation via the fraud schema. 
Connections between the efficiency of a specific manipulation strategy and transaction costsFootnote 3 will be explored. To do so, a realistic model of order book liquidity has to be implemented. Most studies implicitly or explicitly assume sufficient liquidity near the mid-price and an exponential decrease in liquidity further away from the mid-price, using a Gaussian assumption or more relaxed forms.Footnote 4 We propose a new liquidity distribution model based on a mixture of two components. The Gaussian assumption is kept near the mid-price, and a beta distribution is used to model the situation deeper in the order book. The study of market manipulations (and their consequences) has a long tradition in the economic literature (Putniņš 2012). To the best of our knowledge, the present study is the first to construct an agent that reproduces the actions of a fraudulent trader directly using blockchain transaction data, reconstructing the market behavior from this predictor. In addition, our simulation environment can be easily expanded with more sophisticated artificial intelligence models, thus contributing to the active area of research concerned with the integration of artificial intelligence and blockchain technology (Pandl et al. 2020; Salah et al. 2019). Focusing on the economic dimension of the paper, most of the assumptions we formulate to construct the proposed computational model attempt to provide a sound story (based on previous studies analyzing the Bitcoin market) aimed at reconstructing market behavior in a given time period. Our findings might challenge the opinion that the main predictors of the Bitcoin bubble of late 2017 and early 2018 are variables associated with market sentiment (see Kapar and Olmo 2021). While we do not deny that market sentiment plays a major role, our results contest the thesis that the occurrence of this price bubble was spontaneous or a consequence of the widespread popularity of Bitcoin.
In this sense, we contribute to the ongoing discussion among economists on the price formation of cryptocurrencies. Background This section elaborates on the alleged price manipulation using Tether in 2017/18, presenting the technology at stake, the associated socio-technical system, and considerations shared in the relevant literature. What is Tether and why is it controversial Tether is a cryptocurrency whose market price is pegged to the US dollar, making it one of the so-called stablecoins. The objective of Tether is to facilitate transactions between cryptocurrency exchanges, making them easier for traders than with fiat money, because many exchanges have challenges in establishing banking relationships and meeting their strict regulatory requirements. Tether is issued by Tether Limited, which claims that every issued Tether is backed by one dollar. Tether Limited publishes end of month (EoM) statements to prove this. This claim is somewhat controversial from several points of view, as discussed in Griffin and Shams (2019), which points out suspicious auditing methods. Publishing the statement about the reserves potentially gives the issuer leverage to issue more Tether than the current amount of capital reserves in between the audits. Following a series of investigations started by New York Attorney General Letitia James filing a suit in April 2019, Bitfinex and Tether agreed to pay a penalty of $18.5 million in a settlement in February 2021. Furthermore, on February 23rd, Attorney General James claimed that Tether had lied about its reserves.Footnote 5 One of the first exchanges to accept Tether, and one closely associated with Tether Limited through several common shareholders, is the Bitfinex exchange. The analysis in Griffin and Shams (2019) exposed and analyzed suspicious flows of Tether from the Bitfinex exchange to other exchanges that accept Tether, mainly Bittrex and Poloniex.
Before arriving at the target exchanges, the flow passes through several addresses on the Tether blockchain. Once the Tether is exchanged for Bitcoin, the Bitcoin flows back to Bitfinex. As analyzed in their study, these flows were highly correlated with the price increase. Additionally, Griffin and Shams (2019) identified the dominant addresses and concluded that the addresses were likely controlled by the same individual. We will use these insights to model the manipulator’s behavior by observing the change in the balance of the most relevant address. Manipulation scheme The possibility of pushing Tether into the market gives rise to a simple price inflation scheme that can be placed into the category of pump-and-dump schemes. However, as will be explained later, it is even more “powerful” in terms of the dimensions in which profit is generated. In its procedural essence, this scheme can be viewed as an algorithm, and its outline is visualized in Fig. 1 (note that in the real world, many more possibilities of action come into play depending on the circumstances, and the whole scheme can be much more complicated). The strategy of price inflation mostly relies on the assumption that the market will respond with positive feedback (an inflow of buy orders) as a consequence of the Bitcoin buy orders executed by the fraudulent trader. Once the positive trend of the market price is established and sustained, the trader’s cash buffer can be refilled if needed, which means that there will be enough cash for the EoM statements to be satisfied. In principle, the positive feedback assumption is unnecessary because a long position is built up even if the market reacts negatively. However, in that case, an additional source of dollars to cover the EoM statements would be needed; that is, initial capital or a risk-bearing third party would have to be involved. Then, the trader can sustain the long position and wait until market conditions are more favorable to restart the scheme.
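In its procedural essence, the scheme can be sketched as a simple control loop. The sketch below is only an illustration under stated assumptions: the `market` object, its method names, and the positive-feedback check are hypothetical placeholders, not the exact algorithm of Fig. 1.

```python
# Illustrative sketch of the price-inflation scheme outlined in Fig. 1.
# The market interface (buy_btc, price_trend, sell_btc_for_usd) is a
# hypothetical placeholder introduced for this sketch only.

def run_scheme(market, months, issue_amount):
    """Issue unbacked Tether, buy Bitcoin, and liquidate just enough
    before each end-of-month (EoM) statement to refill the cash buffer."""
    btc_balance = 0.0
    cash_buffer = 0.0  # negative = dollars owed to back issued Tether
    for _ in range(months):
        # 1. Issue unbacked Tether and push it into the market as buys.
        tether = issue_amount
        btc_balance += market.buy_btc(tether)
        cash_buffer -= tether
        # 2. If the market responds with positive feedback (rising price),
        #    sell only enough Bitcoin to cover the EoM statement.
        if market.price_trend() > 0:
            sold = market.sell_btc_for_usd(-cash_buffer)
            btc_balance -= sold
            cash_buffer = 0.0  # EoM statement satisfied
        else:
            # Hold the long position and wait for better conditions.
            break
    return btc_balance
```

Because the price rises between buying and selling, fewer Bitcoins are sold than bought each month, so a surplus accumulates, mirroring the "free Bitcoin" dimension of the scheme.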
The profits generated by the scheme in the case of a positive response must be understood in two ways. First, as an increase in the value of the Bitcoins the fraudulent trader already possesses, triggered by the inflow of new buyers. This is the main similarity to pump-and-dump schemes. Second, as a way to obtain “free” Bitcoin: if the price has increased sufficiently, the fraudulent trader needs to sell a smaller amount of Bitcoin for dollars than the amount bought with Tether to cover the EoM statements; thus, there will be a surplus of Bitcoins. The crucial question that the fraudulent trader needs to address is the choice of selling strategy. One plausible strategy would be to pump the price as high as possible and then sell a sufficient amount of Bitcoin by executing a sequence of sell orders a few days before the date of the EoM statement publication. For the reasons explained in later sections, we believe it is cost-effective if the sequence consists of very small sell orders; in this way, the liquidation process takes advantage of the high liquidity near the current price, and it is also harder to notice by the rest of the market participants, so the price should not drop too drastically. The liquidation strategy via a sequence of small sell orders can be further enhanced by executing small sell orders on multiple exchanges. This would make it more challenging to trace the liquidation process; indeed, although the study of Griffin and Shams (2019) analyzes the outflow from Bitfinex reserves during the times concurrent with the publication of the EoM statements, the question of where these flows end remains unanswered. Fig. 1 Price inflation scheme. Unbacked Tether is issued and pushed into the Bitcoin market. The fraudulent trader must have enough cash to cover the EoM statements Volume anomalies In Griffin and Shams (2019), it was concluded that Tether flows from suspicious addresses are correlated with the price increase.
We extend these observations in the context of volume and influence on other traders. We argue that it should be possible to see evidence, in the traded volume, of the fraudulent trader selling the unlawfully obtained Bitcoins to satisfy the EoM statements. Indeed, if the fraudulent trader has an incentive to sell large amounts of Bitcoins within a span of a few days shortly before publishing the EoM statement, or at least somewhere around that time, it is expected that the volume in this time span would temporarily increase, both directly on the exchanges where the selling takes place and secondarily as a response of other traders reacting to the increased amounts of sell orders. In both cases, such actions must be visible in the total Bitcoin trade volume and in the volumes of several large exchanges. Data collection As the trade volumes of Poloniex and Bittrex were several times higher than those of other large exchanges such as Coinbase or Bitflyer, we decided not to use these data, as these exchanges probably experienced wash trading. Instead, we used traded volume data from exchanges that obtained a Bitlicense (Chohan 2018) issued by the New York State Department of Financial Services or had similarly reported volumes. We downloaded the volume data from https://data.bitcoinity.org and aggregated the trade volume of trustworthy exchanges (Bitfinex, Bitflyer, Bithumb, Bitstamp, Coinbase, and Kraken) and the total volume of other smaller exchanges. Had the Poloniex and Bittrex volumes not been artificially inflated, we would naturally have used them for model validation; however, this was not the case. For this reason, we need to define a reference exchange, which will serve as a baseline when analyzing the simulations, to estimate how much influence the fraudulent agent has in terms of traded volume. We then take the volume data of trustworthy exchanges from the same source and take averages over daily values.
As the fraudulent agent was active on two exchanges, we multiply the averages by two. Fig. 2 Aggregated volume with highlighted end of month events and large scale events Data analysis Figure 2 reports the resulting aggregated volume. The red bars correspond to the fraudulent agent supposedly liquidating some of the Bitcoins to satisfy the schema in Fig. 1. We will refer to the days when the liquidation process takes place as EoM events because the chosen days generally correspond to the end-of-month statements published by Tether Limited on the 15th of every month. As the fraudulent trader likely had some initial capital, these days do not have to correspond exactly to the 15th of every month. The general pattern is that these spikes tend to occur every 2 months. As can be seen from Table 1, especially in July, September, November, and January, the liquidation process seems to match the 15th day of the month very well. Additionally, we hypothesize that the blue and green bars in Fig. 2 correspond to the market responding to an increase or decrease in price as a consequence of actions performed by the fraudulent trader. The blue bars correspond to a volume increase due to an increase in buying, and the green bars correspond to an increase in selling. We refer to these days as large scale events (LSE). A possible explanation for these events is that some investors entering or leaving the market temporarily increased the volume, triggering a secondary response from other traders. However, the true reason behind these volume anomalies remains an open question. Given this uncertainty, and as this study aims to focus on the modeling of a fraudulent trader, we will not attempt to model LSEs as actions of some specific agents, but we will assume them in the simulation as prior knowledge (exogenous events).
Inter-exchange influence and liquidity Before we start building the agent-based model of the market, it is important to discuss our assumption that influencing the price on two exchanges is sufficient to influence the market price across all other exchanges. The direct way in which one exchange can influence the price is by trading large volumes of Bitcoin. Most web services that report the price of Bitcoin calculate the price as an average over the last traded price on several exchanges, weighted by the traded volume. These services must have a way of detecting wash trading, but they can hardly filter out a fraudulent trade, such as the one described in previous sections. Therefore, if seemingly legal fraudulent trades of large volumes are executed on one exchange, then the reported price will be skewed by the activity of this exchange, diminishing the influence of the other exchanges. It is clear that if fraudulent buy orders are matched with sell orders with high limit prices, the calculated Bitcoin market price will consequently be pushed higher than the average price traded on other exchanges. A second way the activity on one exchange can influence the whole market is by traders observing price fluctuations on multiple exchanges and generating a profit by taking advantage of these small price differences. It was concluded in Chordia et al. (2008) that such an arbitrage activity, if stimulated by sufficient liquidity, results in higher price efficiency, which, in turn, results in a more stable market price unless new external information enters the market. However, in Marshall et al. (2018), analyzing a database of Bitcoin intraday data on 14 exchanges, including prices of 13 currencies, it was observed that cryptocurrency markets tend to be illiquid and hence less price-efficient. This means that there is a lower overall agreement on the price of Bitcoin. 
From this, it can be concluded that the variations in price across all major exchanges, given the low liquidity of Bitcoin, can increase price volatility. Indeed, in the same study, evidence shows that an increase in illiquidity corresponds with an increase in crash risk across all pairs when the liquidity proxies are either the effective spread or the price impact. This volatility–liquidity relationship was confirmed by several studies (Næs and Skjeltorp 2006; Tripathi et al. 2020; Valenzuela et al. 2015) from a quantitative point of view. Based on this argument, one might expect ascendancy among different cryptocurrency exchanges. The earliest study to investigate this question is Brandvold et al. (2015). This study discusses a leader–follower relationship between various exchanges, linking them to specific events regarding Chinese government policies or the arrest of the Silk Road black market owner (October 2, 2013). Interestingly, the Mt. Gox exchange was identified to have a large but decreasing information share in the market; however, during the period concurrent with the price manipulation period described in Gandal et al. (2018), the Mt. Gox exchange again established its dominant position in the market. This is not only consistent with the previous arguments but also provides an early example that manipulative behavior on one exchange can influence the price of the entire market. In conclusion, illiquidity and low agreement among traders about the price of Bitcoin create favorable conditions for a manipulation scheme to be executed successfully. In later sections, we extend the discussion on illiquidity in greater detail, showing that the way liquidity is distributed in the order book can provide an essential advantage for the fraudulent trader. Exchange model The level of granularity assumed for our investigation is a limit order book model in which orders are placed in a public order book. In cryptocurrency exchanges, an order can enter the order book every second.
In our exchange model, the orders can enter every minute to simplify processing, which means that each trading day d consists of \(T=1440\) tics (minutes). We use the time index t to measure time in the model in minutes, and the time index \(\tau\) to measure time in days; for example, \(p_{t}\) denotes the price at time t, and \(p_{\tau }\) denotes the price at the end of a trading day \(\tau\). Limit order book market model The market environment is based on the model presented in Raberto et al. (2005). Each trader can observe the order book \(O_t\) at time t; that is, a table consisting of six columns: order type, Bitcoin amount, residual amount, limit price, issue day, and expiration day. With respect to the limit price, the buy orders are sorted in descending order and the sell orders are sorted in ascending order. Issue time is the second sorting criterion when the limit prices are equal. Each trading day is split into T tics during which traders can issue orders. If the issue day exceeds the expiration day, the order is removed from the order book. Market ordersFootnote 6 are issued by setting the limit price to zero. At time t, we denote \(B_j[O_t]\) as the limit price of the j-th buy order, and \(S_i[O_t]\) as the limit price of the i-th sell order. The sell order of index i and the buy order of index j are matched if and only if \(S_i[O_t] \le B_j[O_t]\). The order-matching mechanism is defined as follows: every time a new order enters the order book, the first sell and buy orders are inspected to check whether they satisfy \(S_i[O_t] \le B_j[O_t]\); if they do, the orders are matched and a new market price is set. As more than one order can be issued at time t, the last match at time t determines the current price \(p_{t}\). We do not consider expiration times within a minute during the simulation because this would unnecessarily complicate the model.
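The matching rule above can be sketched as follows. This is a deliberately simplified illustration: residual amounts, expiration days, and the exact execution-price convention (here assumed to be the resting sell order's limit price) are abstracted away.

```python
# Minimal sketch of the order-matching rule: a sell order i and buy
# order j match iff S_i <= B_j, with price-time priority. Partial fills
# and residual amounts of the full exchange model are omitted.

def match(buys, sells):
    """buys and sells are lists of (limit_price, issue_time) tuples.
    Returns the last trade price (the current price p_t), or None
    if no match occurred."""
    # Best buy = highest limit price (earlier issue time breaks ties);
    # best sell = lowest limit price.
    buys = sorted(buys, key=lambda o: (-o[0], o[1]))
    sells = sorted(sells, key=lambda o: (o[0], o[1]))
    price = None
    while buys and sells and sells[0][0] <= buys[0][0]:
        # Convention assumed for this sketch: the trade executes at the
        # resting sell order's limit price.
        price = sells[0][0]
        buys.pop(0)
        sells.pop(0)
    return price
```

For example, with a best buy of 105 and a best sell of 100 the orders cross and the last match determines the current price, exactly as in the mechanism described above.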
Expiration time, price and amount distributions One factor that determines the price, and a crucial property of every exchange, is the order book depth. In principle, the order book depth is defined by the distribution of Bitcoin amounts and the limit prices placed in the order book by traders. In our environment, almost all traders decide the Bitcoin amount and limit price by sampling these two values from predefined distributions, thus filling the order book with orders. Based on the findings presented in Schnaubelt et al. (2019), we hypothesize that four main empirical properties are relevant to our study:
1. a broad hump-shaped (bimodal) distribution of limit prices;
2. quickly rising transaction costs;
3. relatively small volume concentrated around the mid-price, compared to the total volume provided by the order book;
4. both sides of the order book being on average symmetric with respect to the mid-price.
We assume that the limit price and Bitcoin amount distributions are independent for simplicity. We assume that the bimodal shape of the limit price distribution is due to a mixture of two distributions. The first component is modeled by a Gaussian distribution \(N(\mu ,\sigma )\), with mean \(\mu\) and variance \(\sigma\). The second component, representing the tail of the limit price distribution, is modeled by a beta distribution \(Beta(\alpha ,\beta )\), where \(\alpha ,\beta\) are the shape parameters. To produce an on average symmetric distribution, the limit price in the former case is defined as \(p_t \cdot N(\mu ,\sigma )\) for buy orders and \(\frac{p_t}{N(\mu ,\sigma )}\) for sell orders. For the tail, we must introduce two additional parameters: the location parameter a and the scale parameter c (Johnson et al. 1995). Then, the limit price of orders placed deeper into the order book is, for buy orders: The second component defining market depth is the amount distribution.
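The symmetric near-mid-price (Gaussian) component described above can be sketched as follows. The parameter values are illustrative rather than the calibrated values of Table 3, and \(\sigma\) is treated as a standard deviation here for simplicity; the beta tail of Eqs. (1a) and (1b) is omitted.

```python
import random

# Sketch of sampling the near-mid-price (Gaussian) component of the
# limit price distribution: p_t * N(mu, sigma) for buy orders and
# p_t / N(mu, sigma) for sell orders, which makes both sides of the
# book symmetric around the mid-price on average. Parameter defaults
# are illustrative assumptions, not the paper's calibrated values.

def gaussian_limit_price(p_t, side, mu=1.0, sigma=0.05, rng=random):
    n = rng.gauss(mu, sigma)
    if side == "buy":
        return p_t * n
    return p_t / n
```

Sampling many buy and sell limit prices around a mid-price of 100 yields two clouds whose averages both sit near the mid-price, reproducing the symmetry property (4) above.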
As we mainly control the transaction costs using the limit prices, the amount distribution is less important, but we will attempt to make it realistic nonetheless. Several characteristic properties of the amount distribution have been observed empirically (Cong et al. 2020). The main characteristic to be captured is the bias of traders toward certain “round” values, such as \(0.5,1,1.5,2,\dots\). We construct this distribution as a mixed discrete/continuous distribution consisting of a Poisson distribution and an exponential distribution of the form: Finally, the expiration time of an order influences the distribution of limit prices and amounts over time. Similar to Cocco et al. (2017), we use the floor value of the log-normal distribution with the parameters \(\mu _{L},\sigma _L\). In the simulation, we set these parameters to relatively low values because it seems plausible to assume that traders will be cautious about keeping any order in the order book for too long, given the uncertainty about the Bitcoin price. In addition, we assume that the expiration time is independent of price and amount. Agent models The success of the scheme used by the fraudulent trader depends on the response of the market. Therefore, we speak of the market response model, or market response agents, when referring to the response of the market to the actions of the fraudulent agent (FA). Market response agents Random agents Random agents (RAs) issue buy or sell orders with equal probability, and hold with probability \(1-P_{RA}\). The limit price is sampled from the Gaussian component defined above. Random speculative agents Random speculative agents (RSAs) issue buy or sell orders in the same way as RAs. The limit price is sampled from the beta distribution according to Eqs. (1a) and (1b), which means the limit prices of their orders are relatively far from the mid-price.
Therefore, the RSA speculates that even orders placed deeper in the order book will be matched, given the market’s volatility. The probability that the RSA will hold is \(1-P_{RSA}\). Chartist agents Chartist agents (CAs) observe the average of Bitcoin returns over the window \([\tau -l,\tau ]\). The probability that a CA will issue an order is \(P_{CA}\). If the average return is positive, the CA issues a buy order; otherwise, a sell order. The limit price is sampled from the Gaussian component. CAs are active if the market price is above $50, and they follow their initial strategy until the price reaches $20000. Subsequently, the CA will decide with probability \(Q_{CA}\) to issue a sell order and with probability \((1-Q_{CA})P_{CA}\) to continue the initial trend-following strategy. The parameter \(Q_{CA}\) can be interpreted as the CA's belief that the price will drop after reaching its presumed maximum. If the price happens to decrease to $10000, the CA will return to pure trend following [for this threshold price approach, see, for instance, Lee and Lee (2021)]. Fraudulent agent In principle, the fraudulent agent's behavioral script is defined by the buying and selling schedules. The buying schedule is constructed directly from the available data on Tether outflows. The selling schedule is constructed following the discussion in previous sections, considering the empirical findings related to Bitcoin order book liquidity. Cash matrix A cash matrix C(t) defines the amount of cash that the FA will use to issue a buy order on a given day and minute. Using this capital, the FA calculates the amount of Bitcoins to buy from the order book and then issues a market order. Let us define \(b_t\) as the amount of Bitcoin the FA has in possession at time t. The amount of Bitcoin \(b_{t+1}\) depends on the available cash allocated in the cash matrix and the state of the order book.
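The FA's per-tic buy step just described can be sketched as follows, with the order book simplified to a sorted list of (limit price, amount) sell orders; this is an illustration of how the Bitcoin obtained depends on both the cash matrix and the book state, not the paper's full exchange model.

```python
# Minimal sketch of the FA's per-tic buy step: spend the cash C(t)
# allocated for this tic against the resting sell side of the book.
# The book is simplified to a list of (limit_price, amount) sell
# orders sorted by ascending price; partial fills are allowed.

def fa_buy_step(cash_t, sell_side):
    """Return (btc_obtained, remaining_sell_side) after issuing a
    market buy order worth cash_t dollars."""
    btc = 0.0
    remaining = []
    for price, amount in sell_side:
        if cash_t <= 0:
            remaining.append((price, amount))
            continue
        cost = price * amount
        if cost <= cash_t:          # consume the whole resting order
            btc += amount
            cash_t -= cost
        else:                       # partial fill; keep the residual
            fill = cash_t / price
            btc += fill
            remaining.append((price, amount - fill))
            cash_t = 0.0
    return btc, remaining
```

Because deeper sell orders carry higher limit prices, the same cash buys fewer Bitcoins as the order walks up the book, which is exactly the transaction-cost effect the liquidity model is designed to capture.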
The cash matrix was constructed from the amounts of Tether sent from the 1J1d and 1AA6 addresses, as identified in Griffin and Shams (2019), spanning 1 year and 3 months from January 1, 2017, to March 1, 2018. Ninety percent of Tether flows from Bitfinex to Poloniex go to the 1J1d deposit address, and 72% of Tether flows from Bitfinex to Bittrex go to 1AA6.Footnote 7 If we identify one Tether with one USD, ignoring negligible fluctuations in the price of Tether, then these flows provide a compelling picture of the FA’s capital. As the timescale of the model is minutes per day, the Tether flows are aggregated per minute. As the market model is a scaled-down model of an exchange, the cash matrix also needs to be scaled down, which is done by multiplying the cash matrix element-wise by the scalar parameter s. Selling strategy The selling strategy is the strategy of the FA to liquidate a portion of the Bitcoins to refill the cash buffer and then satisfy the EoM statements. We claim that these selling days roughly correspond to the date when EoM statements are published by Tether Limited, which is the 15th of every month, but the FA does not need to meet this deadline strictly, given that the FA most likely has backup capital available. Although there are no strict consequences for the FA for not fulfilling the obligations in the model environment, we assume that if \(b_t < 0\) at any point in time, the FA will exit the market to maintain a long position on the obtained Bitcoins. The exit of the FA typically occurs when the market response is not sufficiently positive and the price is too low for the FA to regain capital by selling Bitcoins. If everything goes as planned, the FA will sell a small amount of Bitcoins every minute by issuing a limit sell order, decreasing the number of Bitcoins \(b_t\) that the FA has in possession at time t.
As the order book is relatively liquid near the mid-price, it is logical for the FA to issue only small sell orders and avoid large sell orders because of the rapid increase in transaction costs. Thus, the FA aims to obtain a fraction \(\frac{c_i}{1440}\) of the total cash that was used to obtain Bitcoins, where the \(c_i\) are the coefficients in Table 1, which specify how much of the cash is planned to be recovered on a specific day. The coefficients are calculated from empirical data by taking the values of the traded volume and dividing each value by a normalizing constant. For instance, if the traded volume on September 14 was 484601.8 Bitcoins and on September 15 was 705641.0 Bitcoins, each value is divided by the sum \(484601.8 + 705641.0\) to obtain the coefficients; thus, \(0.4071453 + 0.5928547 = 1\). This means that on September 14, the FA plans to obtain \(40.7\%\) and on the following day \(59.3\%\) of the capital deficit present in the cash buffer. Table 1 List of EoM events and the amount of cash planned to be obtained on a given day in order to cover the expenses incurred by buying Bitcoin Large scale events Volume anomalies that do not seem to be related to the actions of the FA are regarded as LSEs. While it might be possible to model these spikes in traded volume as the actions of certain types of agents, we take the easier path of using the information present in the traded volume data. The dates on which LSEs occurred are extracted from Fig. 2 and listed in Table 2, together with a hypothesis on whether an LSE consisted of predominantly buy or sell orders, which is not possible to read from volume data alone, but can be assumed depending on the trend in the market price.Footnote 8 This means that, in addition to the standard trading activity during one day, an increase in trading activity is arranged by issuing more orders to reproduce the green and blue volume anomalies in Fig. 2.
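The coefficient calculation in the September example above amounts to a simple normalization, which can be sketched as:

```python
# The EoM coefficients c_i are the traded volumes on the liquidation
# days, normalized to sum to one (the September 14-15 example above).

def eom_coefficients(volumes):
    total = sum(volumes)
    return [v / total for v in volumes]

coeffs = eom_coefficients([484601.8, 705641.0])
# Per-minute selling target on day i: the FA plans to recover
# c_i / 1440 of the spent cash in each minute of that day.
per_minute = [c / 1440 for c in coeffs]
```

With the September volumes, this reproduces the coefficients 0.4071453 and 0.5928547 quoted in the text.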
The magnitude of an LSE is defined by the number of orders issued on a given day and the amount of Bitcoin bought or sold per order. As we do not have data records related to LSE events, we make the simplifying assumption that the orders during one LSE day arrive with a frequency f to trade an amount \(\rho\); that is, every f minutes a new market order is issued to buy or sell \(\rho\) Bitcoins. Additionally, depending on the exact date, the amount \(\rho\) is multiplied by a scaling factor such that the volume anomaly during the simulation matches the empirical volume anomaly. The scaling factors are listed in Table 2. Table 2 List of large scale events associated with volume spikes that are not explained by EoM events Experiments and results To demonstrate the essential influence of the FA on the market, four simulation experiments are presented: 1. Non-manipulated scenarios: (a) Base scenario (b) Susceptible scenario (c) Susceptible scenario with large scale events 2. Manipulated scenario. Thus, the market price time series can be decomposed in terms of the activity of agents. To ensure that the results are comparable across scenarios, the model parameters are kept the same as listed in Table 3, except that the parameters defining the activity of excluded agents or events are set to zero in each of the first three scenarios. In the non-manipulated scenarios, the market price time series is the central quantity that provides information on the behavior of the underlying system. In the manipulated scenario, three more quantities related to the activity and influence of the FA are measured along with the price. These quantities are: The Market Price generated by the model is compared to the Bitcoin market price. The Volume generated by the model is compared to the reference exchange as defined in the section on volume anomalies. Both empirical and simulated volumes were normalized for comparison on the same scale.
The Inflow of Bitcoin as obtained by the FA during the simulation is compared to the inflow of Bitcoin to the 1LSg address. As in the case of volume, both the empirical and simulated inflows were normalized. The Relative Influence of the FA is defined as the ratio of the Inflow of Bitcoin and the Volume. In this case, normalization is not needed. Empirical data from January 1, 2017, to March 1, 2018, are used to calibrate the model parameters, and the results are visualized for each scenario (Abel 2015). Some parameters in Table 3 were predefined based on empirical findings (see the “Discussion” section), and the rest of the parameters were calibrated using the stochastic simultaneous optimistic optimization algorithm (Valko et al. 2013), except for the parameter l, which was calibrated manually. More details about the calibration can be found in the “Appendix”. Fig. 3 Simulated market price time series in terms of activity of agents or presence of large scale events. Base scenario with only random agents and random speculative agents; susceptible scenario including Chartist agents; and susceptible scenario with large scale events included in the simulation. The green line is the median price with 20th, 50th and 95th prediction intervals Non-manipulated scenarios In the base scenario, we set \(\rho = s = P_{CA} = 0\), which means that the FA and CAs are not active, and the scaling factor of the additional amounts bought or sold during the LSEs is multiplied by zero. In the susceptible scenario, the CAs are active and issue orders with a given probability. We refer to this scenario as “susceptible” because, contrary to the base scenario, the market with CAs is prone to large price fluctuations. However, as will be apparent from the simulations, even if LSEs are included, it is rather unlikely that the price reaches $20000. Fig. 4 Histograms related to non-manipulated scenarios.
In subfigure (a) the histogram of p-values of the Augmented Dickey-Fuller test calculated for each simulation of the base scenario is plotted with a red dashed line at value 0.05. In subfigures (b) and (c) the histograms of maximum values of the market price achieved during each simulation are plotted for the susceptible scenario and the susceptible scenario with LSEs, respectively Base scenario This is a scenario where the market is in an equilibrium state, which is intuitively expected because, with no speculation present on the market and a sufficient amount of liquidity on both sides of the order book, large price fluctuations are improbable. By calculating the p value of the augmented Dickey–Fuller test for stationarity for each simulation of the base scenario, we obtain a distribution of p values, as depicted in Fig. 4a. From this histogram, we can see that the alternative hypothesis of stationarity dominates. Susceptible scenario This scenario includes agents that follow the trend, and therefore larger price fluctuations can be expected. However, although in this case the stationarity test did not provide evidence for stationarity, the price time series is considerably “well-behaved.” Indeed, if we look at the histogram of the maximum values (Fig. 4b), only a minimal number of simulations surpass the $10000 Bitcoin price. Susceptible scenario with large scale events This scenario includes both the speculative behavior of the CAs and disturbances in the form of LSEs. As shown in Fig. 3, the mean value of the price temporarily shifts before the LSE sell orders lower the price to its long-term value. Overall, this disturbance is insufficient to produce an increasing trend, even when CAs are present. Manipulated scenario In this scenario, the FA is active during the simulation, and all parameters are set as shown in Table 3. In Fig.
5, we can see the consequences of the presence of the FA compared with the non-manipulated scenarios visualized in Fig. 3. The influence of EoM events is visible in the price time series and, together with LSEs, forms spikes in the volume. Typically, the FA decides to hold a long position in 20–25% of the cases. The trajectories of these unfinished manipulation attempts are excluded from the figures because they represent a different market regime that would need a different dataset to be validated.

Fig. 5 Simulated market price and market volume with the Fraudulent agent included during the simulation, along with the large scale events and all the agents of the response model. The empirical data (blue) are plotted against the simulated median (green) with the 20th, 50th, and 95th prediction intervals

If everything goes as planned, the FA buys Bitcoins using the allocated cash in the cash matrix, as shown in Fig. 6, where simulated Bitcoin inflows measured in the model are plotted against the inflow of Bitcoin into the 1LSg address. It can be seen that the Tether outflow encoded in the cash matrix is reproduced by the market simulation with almost the same Bitcoin inflow as that obtained from the real Bitcoin blockchain. By aggregating these simulated daily inflows, the Bitcoin balance \(b_t\) is obtained and displayed in Fig. 6, where sudden drops owing to EoM events are visible. The balance increases approximately linearly between the drops, and a surplus of Bitcoin is produced over the longer period. Note that the surplus was produced only by executing Scheme 1, and no resources (Tether or Dollars) were spent. In other words, the other market participants paid the bill.

Fig. 6 Time series detailing the behavior of the fraudulent agent with respect to empirical data (blue); compared to the simulated median (green) with the 20th, 50th, and 95th prediction intervals

Limit order book market robustness

The liquidity of the order book is a strong predictor of the success of the scheme defined by Fig. 1.
Increasing liquidity by increasing the number of orders issued by random agents via the parameters \(P_{RA}\) and \(P_{RSA}\), or by increasing the amounts issued per order via the parameters of the amount distribution, would be the most straightforward way to make the order book more liquid. In this case, assuming the FA would not adapt, the relative influence of the FA would decrease; thus, the market would be more resistant to manipulation attempts.

Fig. 7 The maximal value of the price time series, averaged over 80 simulations, is plotted against the parameter \(\alpha\) of the Beta distribution controlling the liquidity deeper in the order book

What is perhaps less obvious is that not only the total amount of liquidity but also its distribution is a relevant factor. As noted previously, traders’ low agreement about the price of an asset translates into the dispersion of limit prices further away from the mid-price. Indeed, if traders agreed on the asset’s market price, they would place their orders much closer to the mid-price. More orders concentrated closer to the mid-price would result in lower transaction costs; therefore, the efficiency of the FA’s manipulation strategy should be lower. This hypothesis can easily be tested in our model environment. By increasing the parameter \(\alpha\), orders with limit prices previously placed further away from the mid-price will now be placed closer to it: increasing the first shape parameter of the beta distribution, while keeping the second shape parameter equal to one, moves the mass of the density function toward the value of its location parameter a. This means that there are more orders with a limit price close to \((1+a)p_{t}\) for sell orders and close to \((1-a)p_{t}\) for buy orders. As shown in Fig. 7, by increasing the parameter \(\alpha\), the efficiency of the manipulation strategy decreases because the inflated price decreases.
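The effect of the shape parameter can be sketched numerically. The snippet below is an illustrative sketch, not the paper's implementation: it draws relative offsets from the mid-price on \([a, c]\) via a Beta(\(\alpha\), 1) variate, and the `1 - x` mapping is an assumption chosen so that increasing \(\alpha\) concentrates limit prices near the minimum offset `A`, as described above.

```python
import random

random.seed(1)

A, C = 0.015, 0.5  # min / max relative distance from the mid-price (from the paper)

def sample_offsets(alpha, n=10_000):
    # x ~ Beta(alpha, 1); the (1 - x) mapping is an assumption chosen so that
    # a larger alpha moves the mass of offsets toward the minimum offset A.
    return [A + (C - A) * (1.0 - random.betavariate(alpha, 1.0)) for _ in range(n)]

mean_diffuse = sum(sample_offsets(1.0)) / 10_000  # alpha = 1: uniform on [A, C]
mean_tight = sum(sample_offsets(5.0)) / 10_000    # alpha = 5: clustered near A
```

With the larger \(\alpha\), the average offset shrinks toward `A`, i.e., liquidity concentrates near the mid-price, which is the regime in which the FA's buy orders can no longer match sell orders with high limit prices.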
The consequence for the FA is that, despite buying more Bitcoin for the same amount of Tether, the price impact is lower because the FA’s buy orders do not match sell orders with as high limit prices. Thus, changing the distribution of liquidity, in our case by controlling the parameter \(\alpha\), has a similar effect to increasing the overall liquidity. Note that the parameter \(\alpha\) has little effect during EoM events because the FA sells Bitcoin in small amounts, matching buy orders near the mid-price. As the FA has a virtually unlimited amount of Tether to push into the Bitcoin market, it is possible to issue more Tether. However, this would increase the risk associated with the given manipulation scheme; the fraudulent trader would need to increase the backup capital or default in case of an insufficiently positive market response. Indeed, by increasing the parameter \(\alpha\) in our computational experiment, the number of FA defaults was higher. Furthermore, note that even if the FA successfully manages to execute the scheme, the profits would be lower, while the risk would increase.

Discussion

Methodological concerns

In the present work, the design of the model follows an incremental strategy, increasing the complexity until a sufficiently good fit to the empirical data is obtained.Footnote 9 This approach is well suited to this case study because the essential importance of the FA was demonstrated by decomposing the market price time series in terms of agents’ activities. Given the high level of consistency of our assumptions with other empirical studies in the economic literature and the satisfactory fit to empirical data related to the Bitcoin market, high confidence can be given to the modeling assumptions related to the principles behind the success of the manipulation scheme investigated in this study. Some of the parameter values in Table 3 were set to match the empirical observations of Bitcoin limit-order books (Schnaubelt et al.
2019). It was observed that orders are placed as far as \(50\%\) from the mid-price, so we set \(c=0.5\). The location of the local maximum in the hump-shaped average order book was observed to be approximately \(1\%\) from the mid-price. This fact is also reflected in the model by setting \(a=0.015\). The parameters of the amount distribution (2) were similarly predefined, considering the findings in Cong et al. (2020). The calibration results agree with known empirical observations. As the probability of the RA issuing an order is higher than that of the RSA, most of the liquidity will be located near the mid-price. However, owing to the relatively low value of the \(\alpha\) parameter, it is still possible to observe orders further away from the mid-price, which is again in agreement with the findings in Schnaubelt et al. (2019). Although the model implements several realistic assumptions, many simplifications cause higher prediction errors. For instance, for the reasons described in the section discussing volume anomalies, we deem it a plausible assumption that it was sufficient for the fraudulent trader to influence the price on Poloniex and Bittrex, which means that to model a manipulation of the entire Bitcoin market, it should be sufficient to model the manipulation using only one order book. However, such a simplification is not sufficient to fully capture EoM events. If the FA can liquidate Bitcoins on multiple exchanges in small amounts, then this process is more price-efficient than liquidating on a single exchange. This means that the influence of the real fraudulent trader could have been even slightly higher, and thus the parameter s is probably underestimated. The simulated data did not produce very good results, especially from the end of May until the end of July, roughly between the 2nd and 3rd EoM events.
The activity of chartist traders likely depends on the average returns and the Bitcoin market value, which means that the CA ought to be less active if the price is low. This is not the case in the model because the parameter \(P_{CA}\) is constant. Moreover, to obtain a better fit to the empirical data, it would be necessary to include the flows from the dominant Tether addresses and the flows from all Tether addresses controlled by the fraudulent trader. It is also possible that the fraudulent trader followed a less aggressive selling strategy prior to the third EoM event and started the liquidation process before July 14, 2017. In the fragile Bitcoin market, it is challenging to correctly identify the reasons behind some of the insufficiencies present in our model, because even actions with negligible influence on the price in more liquid markets can significantly influence the illiquid Bitcoin market.

Regulatory implications

The economic understanding accompanying the proposed model has important implications for the contemporary cryptocurrency market. A regulation requiring stablecoin providers to prove their capital not just once a month but at much shorter intervals is highly desirable to protect the customers of these providers, and other participants in the market, from being misled into a pump-and-dump scheme. Policymakers are slowly catching up with the industry in terms of legislative regulation. The European Union Commission proposed and agreed on a legal framework for cryptocurrencies,Footnote 10 especially targeting stablecoins in its “Regulation on Markets in Crypto Assets” proposal. In the U.S., President Biden’s administration has also recently taken a proactive stand on stablecoin regulation.Footnote 11 Individual governments can decide the strength of regulations in agreement with their long-term strategy and consider the consequences of their decisions concerning innovation.
These decisions can be effectively implemented at the domestic level; however, in the case of exchanges there might be an incentive to avoid regulations, as regulations pose the risk of a decrease in traded volume, and exchanges may themselves engage in illicit behavior. In addition to the legislative regulations implemented in various countries, a different, self-regulatory approach can be adopted. Regulations to protect the stability of a market by restricting trading mechanisms are already in place on FOREX markets, for instance, constraints on the maximum amount issued by one order, the maximum number of orders of a trader per day, or a maximum limit price. Some of these simple restrictions have already been implemented on more regulated exchanges, such as Huobi or Coinbase. Another, more invasive intervention is circuit breakers, such as price limits or trading halts (Sifat and Mohamad 2019). These regulations would make it more challenging to facilitate manipulative activities but might be perceived as too restrictive, slowing down the sector’s growth. Following the discussion on Bitcoin limit order book market robustness, we can target a dynamic approach that prevents market manipulation without affecting daily trading traffic. With a better understanding of how liquidity is linked to market manipulation, an exchange can implement a market surveillance system (Cumming and Johan 2008) to inspect the liquidity distribution in real time and predict the market impact of an issued order (Gu et al. 2008; Weber and Rosenow 2005). Then, the exchange can refuse to accept an order if there is suspicion that the order aims to create a sudden increase or decrease in the market price.
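Such an impact-based admission check can be sketched as follows. This is an illustrative toy, not the surveillance systems cited above: the book contents and the 1% threshold are hypothetical values, and the impact measure simply walks the sell side of the book to find the marginal price a market buy would reach.

```python
# Toy sell side of a limit order book: (limit_price, amount), sorted ascending.
asks = [(100.0, 5.0), (100.5, 5.0), (101.0, 10.0), (105.0, 20.0)]

def predicted_impact(asks, buy_amount):
    """Walk the book and return the relative price move a market buy would cause."""
    best = asks[0][0]
    remaining, last = buy_amount, best
    for price, amount in asks:
        take = min(remaining, amount)
        remaining -= take
        last = price
        if remaining <= 0:
            break
    return (last - best) / best  # relative shift of the marginal execution price

MAX_IMPACT = 0.01  # assumed surveillance threshold: refuse moves above 1%

def accept_order(asks, buy_amount):
    """Refuse orders whose predicted price impact exceeds the threshold."""
    return predicted_impact(asks, buy_amount) <= MAX_IMPACT
```

Under this rule, a modest buy that stays within the first two levels is accepted, while a buy large enough to sweep deep into the book, exactly the kind of order the FA issues, is refused.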
Moreover, exchanges can search for fraudulent behavioral trading patterns in the order books, directly on the blockchain, in aggregated statistics, or even on public forums, and then evaluate the risk of the trading behavior being associated with fraudulent activity and either intervene by refusing to accept orders or report the suspicion to a relevant authority. As identified in this study, the typical (volume) pattern of Scheme 1 is manifested in approximately periodic spikes in the traded volume. A well-designed monitoring system should be capable of detecting suspicious addresses that repeatedly issue buy orders with a relatively high predicted market impact on a few specific exchanges, followed by high Bitcoin liquidation at roughly periodic intervals on different exchanges, and are thus probably engaged in the execution of Scheme 1. It is likely that if such a monitoring system were implemented, the manipulation following Scheme 1 would be ineffective. The advantage of the approach described above is that, on the blockchain, all transactions are public and immutable. Any monitoring system can access the full transaction history, which is usually not the case in traditional finance. This property offers, in principle, innovation potential for sophisticated self-learning AI models to oversee market behavior. These models can be trained on historical datasets or in simulated environments capable of reproducing fraudulent patterns, such as those presented in this study. However, one must be aware of the possible limitations that often arise from the adversarial nature of these systems. Present detection tools, therefore, might not be powerful enough to deal with more sophisticated fraud schemes, and more studies need to be done in this area.
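A minimal detector for the approximately periodic volume signature described above might look as follows. The z-score threshold, the gap tolerance, and the synthetic EoM-like volume series are illustrative assumptions, not a production design.

```python
def spike_days(volume, z=3.0):
    """Indices where volume exceeds mean + z * std (a simple spike detector)."""
    m = sum(volume) / len(volume)
    sd = (sum((v - m) ** 2 for v in volume) / len(volume)) ** 0.5
    return [t for t, v in enumerate(volume) if v > m + z * sd]

def roughly_periodic(days, tolerance=3):
    """True if gaps between consecutive spikes differ by at most `tolerance` days."""
    gaps = [b - a for a, b in zip(days, days[1:])]
    return len(gaps) >= 2 and max(gaps) - min(gaps) <= tolerance

# Hypothetical volume series with EoM-like spikes roughly every 30 days.
volume = [10.0] * 90
for t in (29, 60, 89):
    volume[t] = 200.0
```

Flagging addresses whose activity produces spike days that pass `roughly_periodic` would be one crude building block of the monitoring system sketched above.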
While implementing regulatory systems to reduce or inhibit market manipulation would clearly benefit exchanges by stabilizing the market, this might be challenging to achieve without an overarching authority. Moreover, as exchanges to a certain extent benefit from fraudulent behavior, there might not be enough incentives to combat fraud: the short-term benefits of the current state of affairs may be more appealing than the long-term benefits of a reliable medium of exchange. For instance, Kim et al. (2021) question the effectiveness of money laundering reporting through exchanges. That study assumes that exchanges benefit from money laundering, in which case reporting suspicious transactions can increase money laundering activity. One must be aware that a similar situation can occur when dealing with market manipulation. It can be argued that one of the main reasons for the widespread popularization of Bitcoin was the price increase orchestrated in 2017. Even though the exchanges likely knew about the issue,Footnote 12 as is apparent both from the statistical evidence presented in Griffin and Shams (2019) and from the EoM events reconstructed by our model, the manipulation continued.

Conclusion and further research directions

It was demonstrated that introducing a fraudulent agent with a price manipulation strategy could create a price bubble that would otherwise not occur, or would occur only with practically zero probability. The model can also explain several quantitative phenomena. Most anomalies, such as dips in the market price or spikes in the market volume during 2017 and the beginning of 2018, were connected to the end-of-month statements of Tether Limited. We hypothesize that the remaining anomalies can be explained by the inflow of new investors in response to the positive trend in market price due to price manipulation. Additionally, the efficiency of a price manipulation scheme was connected to several studies on order book liquidity and price formation.
The dependency on the shape of the liquidity distribution is discussed and demonstrated computationally. The results of our model provide important insights to further the understanding of exchange manipulation, with possible impacts on the entire market. These findings can be fruitful for policymakers and regulators when designing suitable countermeasures against market abuse. In addition, the proposed countermeasures can be tested in a simulated environment, such as the one presented in this study or one similar to ours, going in the promising direction of deep integration of distributed ledger technologies and artificial intelligence. These research directions may be closely related to the study of contingent economic arrangements or experimental financial instruments. If a decentralized monetary system is to work, it seems essential to implement a set of regulations that prevent manipulation attempts, or at least make it more challenging to execute them successfully. This model can be extended in several ways. The two most obvious extensions are to use full information from the addresses related to the market manipulator, as in Griffin and Shams (2019), or to use detailed order book data, as in Schnaubelt et al. (2019), but directly for the exchanges involved. Combining the datasets of these two studies with our model can potentially remove some of the remaining misalignments and provide a better fit for market price, relative volume, and realized inflow. Furthermore, a more sophisticated approach can be adopted when designing the fraudulent agent and the response agents, a choice that would include more complex behavioral rules and allow the agents to be active on several exchanges. In particular, the fraudulent trader should be enabled to observe and act upon the liquidity situation in the order book, the response of the market, and the possible market abuse countermeasures that may be included in the simulated environment.
Finally, if a sufficiently rich market model is attained, the knowledge and understanding obtained by analyzing its function can be used to update the trading infrastructure of Bitcoin. The methodology developed in this research area has the potential to be further generalized and applied to other novel economic and financial infrastructures.

Notes

For a short introduction to the most well-known cryptocurrency, Bitcoin, we refer to Böhme et al. (2015), and for an overview of others we refer to Berentsen and Schär (2018). For a review and more examples, we refer to Badawi and Jourdan (2020). Defined as the premium a trader has to pay to liquidate a given amount of assets. For example, in Raberto et al. (2005) a Gaussian assumption is employed, which is also used in the cryptocurrency setting (Cocco and Marchesi 2016; Cocco et al. 2017, 2019). Several studies relaxed the Gaussian assumption with either a log-normal assumption (Bartolozzi 2010) or a power-law assumption (Cui and Brabazon 2012; McGroarty et al. 2019).

Contributions

PF identified the research question, designed and implemented the model, did the literature review, acquisition of data, analysis of empirical data, model calibration, and analysis of the simulation output. GS and SK contributed with supervision, review, and editing. TvE contributed with supervision and review. All authors read and approved the final manuscript.

Ethics declarations

Ethical approval and consent to participate: This article does not contain any studies with human participants or animals performed by any of the authors.

Competing interests: The authors declare that they have no competing interests.
Additional information

Publisher's Note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: simulation and calibration details

In principle, we are interested in finding values of the model parameters that provide a good fit to the price time series and do not overestimate the influence of exogenous elements such as the activity of the FA or the magnitude of the LSEs. This means that the accuracy of the model needs to be defined either as a multi-objective function, or as a single-objective function that sums weighted components of the multi-objective function, where: the first component measures the error between the generated and empirical market price; the second component measures the error between the generated and empirical market volume; and the third component measures the error between the generated and empirical relative volume. In the optimization routine, we choose the simpler weighted option. Furthermore, if the empirical and generated market volume are standardized, then the volume peaks already provide information about the influence of the FA (through EoM events) and the influence of the LSEs, both relative to the spikes in the empirical volume. This means that by measuring the distance between the generated and the empirical volume during the EoM or LSE days, we already impose a penalty if the algorithm were to overestimate the influence of the EoM- and LSE-related parameters, namely s and \(\rho\). Therefore, the objective function measuring the accuracy of the model for the parameter vector \(\theta = (\sigma , \alpha , P_{CA}, Q_{CA}, P_{RA}, P_{RSA}, s, \rho )\), with predefined values for \(\mu , \beta , a, c, q, \lambda _P, \lambda _E, l\), can be simplified to: This provides a compromise between complexity and accuracy. The weight is \(w = 400\).
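Since the equation itself is not reproduced here, the structure of such an objective can be sketched as a weighted sum of error components. The RMSE error measure and the placement of the weight \(w = 400\) on the price component are assumptions for illustration; only the three components and the weight value are stated above.

```python
def rmse(xs, ys):
    """Root-mean-square error between two equal-length series."""
    return (sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)) ** 0.5

W = 400.0  # weight from the paper; which component it multiplies is assumed here

def objective(sim_price, emp_price, sim_vol, emp_vol, sim_relvol, emp_relvol):
    # Weighted single-objective combination of the three error components.
    return (W * rmse(sim_price, emp_price)
            + rmse(sim_vol, emp_vol)
            + rmse(sim_relvol, emp_relvol))
```

A calibration routine would then minimize `objective` over the parameter vector \(\theta\), e.g., with the stochastic simultaneous optimistic optimization algorithm mentioned in the main text.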
The symbols \({\tilde{p}}_{\tau }\) and \({\tilde{u}}_{\tau }\) denote the median time series taken over a collection of 16 trajectories of the generated price and volume, respectively, in order to counter the stochasticity of the model output. Most of the parameters of the model are relatively sensitive, and since the response-model agents do not have bounds on available capital, certain parameter configurations can cause the market price to grow exponentially or to decline essentially to zero. This extreme behavior mainly depends on the value of the parameter l. The parameter l and the bounds on the parameter vector \(\theta\) were decided during the initial exploration of the simulation output. The bounds on the parameter vector \(\theta\) are listed in Table 4.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
What is Blockchain Technology? How Does Blockchain Work ...
Over the past few years, you have consistently heard the term “blockchain technology,” probably in connection with cryptocurrencies like Bitcoin. In fact, you may be asking yourself, “what is blockchain technology?” Blockchain can seem like a platitude, with no real meaning that the layman can easily understand. It is imperative to answer “what is blockchain technology,” including the technology that is used, how it works, and how it is becoming vital in the digital world. As blockchain continues to grow and become more user-friendly, the onus is on you to learn this evolving technology to prepare for the future. If you are new to blockchain, this is the right platform to gain solid foundational knowledge. In this article, you will learn how to answer the question “what is blockchain technology?” You will also learn how blockchain works, why it is important, and how you can use this field to advance your career.

What Is Blockchain Technology?

Blockchain is a method of recording information that makes it impossible or difficult for the system to be changed, hacked, or manipulated. A blockchain is a distributed ledger that duplicates and distributes transactions across the network of computers participating in the blockchain. Blockchain technology is a structure that stores public transactional records, also known as blocks, in several databases, known as the “chain,” in a network connected through peer-to-peer nodes. Typically, this storage is referred to as a “digital ledger.” Every transaction in this ledger is authorized by the digital signature of the owner, which authenticates the transaction and safeguards it from tampering. Hence, the information the digital ledger contains is highly secure. In simpler words, the digital ledger is like a Google spreadsheet shared among numerous computers in a network, in which the transactional records are stored based on actual purchases.
The fascinating angle is that anybody can see the data, but they can’t corrupt it.

Why is Blockchain Popular?

Suppose you are transferring money to your family or friends from your bank account. You would log in to online banking and transfer the amount to the other person using their account number. When the transaction is done, your bank updates the transaction records. It seems simple enough, right? There is a potential issue that most of us neglect: these types of transactions can be tampered with very quickly. People who are familiar with this truth are often wary of using these types of transactions, hence the evolution of third-party payment applications in recent years. But this vulnerability is essentially why blockchain technology was created. Technologically, blockchain is a digital ledger that has been gaining a lot of attention and traction recently. But why has it become so popular? Well, let’s dig into it to fathom the whole concept. Record keeping of data and transactions is a crucial part of business. Often, this information is handled in house or passed through a third party like brokers, bankers, or lawyers, increasing time, cost, or both for the business. Fortunately, blockchain avoids this long process and facilitates the faster movement of the transaction, thereby saving both time and money. Most people assume blockchain and Bitcoin can be used interchangeably, but in reality, that’s not the case. Blockchain is the technology capable of supporting various applications related to multiple industries like finance, supply chain, manufacturing, etc., while Bitcoin is a currency that relies on blockchain technology to be secure.
Blockchain is an emerging technology with many advantages in an increasingly digital world:

Highly Secure: It uses a digital signature feature to conduct fraud-free transactions, making it impossible to corrupt or change the data of an individual without that individual's specific digital signature.

Decentralized System: Conventionally, you need the approval of regulatory authorities like a government or bank for transactions; with blockchain, however, transactions are done with the mutual consensus of users, resulting in smoother, safer, and faster transactions.

Automation Capability: It is programmable and can generate systematic actions, events, and payments automatically when the criteria of the trigger are met.

Structure and Design of Blockchain

A blockchain is, at its core, a distributed, immutable, and decentralized ledger that consists of a chain of blocks, each containing a set of data. The blocks are linked together using cryptographic techniques and form a chronological chain of information. The structure of a blockchain is designed to ensure the security of data through its consensus mechanism, in which a network of nodes agrees on the validity of transactions before adding them to the blockchain.

Blocks: A block in a blockchain is a combination of three main components:

1. The header contains metadata such as a timestamp, a random number used in the mining process (the nonce), and the previous block's hash.

2. The data section contains the actual information stored in the block, such as transactions and smart contracts.

3. The hash is a unique cryptographic value that acts as a representative of the entire block and is used for verification purposes.

Block Time: Block time refers to the time taken to generate a new block in a blockchain. Different blockchains have different block times, which can vary from a few seconds to minutes, or even hours.
Shorter block times give faster transaction confirmations but a higher chance of conflicts, while longer block times may increase the wait for transaction confirmations but reduce the chance of conflicts.

Hard Forks: A hard fork in a blockchain refers to a permanent divergence in the blockchain's history that results in two separate chains. It can happen due to a fundamental change in the protocol of a blockchain when not all nodes agree on the update. Hard forks can create new cryptocurrencies or split existing ones, and resolving them requires consensus among the network participants.

Decentralization: Decentralization is the key feature of blockchain technology. In a decentralized blockchain, there is no single central authority that controls the network; decision-making power is distributed among a network of nodes that collectively validate and agree on the transactions to be added to the blockchain. This decentralized nature promotes transparency, trust, and security. It also removes reliance on a single point of failure and minimizes the risk of data manipulation.

Finality: Finality refers to the irreversible confirmation of transactions in a blockchain. Once a transaction is added to a block and the block is confirmed by the network, it becomes immutable and cannot be reversed. This feature ensures the integrity of the data and prevents double spending, providing a high level of security and trust in the blockchain.

Openness: Openness makes the blockchain accessible to anyone who intends to participate in the network. Anyone can join the network, validate transactions, and add new blocks to the blockchain, so long as they follow the consensus rules. Openness promotes inclusivity, transparency, and innovation, as it allows for participation from various stakeholders.
Public Blockchain: A public blockchain is open to everyone, allowing anyone to join the network, perform transactions, and participate in the consensus process. Public blockchains are transparent because all transactions are publicly recorded. How Does Blockchain Technology Work? In recent years, you may have noticed many businesses around the world integrating Blockchain technology. But how exactly does Blockchain technology work? Is this a significant change or a simple addition? The advancements of Blockchain are still young and have the potential to be revolutionary in the future; so, let's begin demystifying this technology. Blockchain is a combination of three leading technologies: cryptographic keys; a peer-to-peer network containing a shared ledger; and a means of computing to store the transactions and records of the network. Cryptographic keys come in pairs: a private key and a public key. These keys help in performing successful transactions between two parties. Each individual has these two keys, which they use to produce a secure digital identity reference. This secured identity is the most important aspect of Blockchain technology. In the world of cryptocurrency, this identity is referred to as a 'digital signature' and is used for authorizing and controlling transactions. The digital signature is merged with the peer-to-peer network; a large number of individuals who act as authorities use the digital signature in order to reach a consensus on transactions, among other issues. When they authorize a deal, it is certified by a mathematical verification, which results in a successful secured transaction between the two network-connected parties. To sum it up, Blockchain users employ cryptographic keys to perform different types of digital interactions over the peer-to-peer network.
Learn the Ins & Outs of Software Development Types of Blockchain Private Blockchain Networks Private blockchains operate on closed networks, and tend to work well for private businesses and organizations. Companies can use private blockchains to customize accessibility and authorization preferences, network parameters, and other important security options. Only one authority manages a private blockchain network. Public Blockchain Networks Bitcoin and other cryptocurrencies originated from public blockchains, which also played a role in popularizing distributed ledger technology (DLT). Public blockchains also help to eliminate certain challenges and issues, such as security flaws and centralization. With DLT, data is distributed across a peer-to-peer network, rather than being stored in a single location. A consensus algorithm is used for verifying information authenticity; proof of stake (PoS) and proof of work (PoW) are two frequently used consensus methods. Permissioned Blockchain Networks Also sometimes known as hybrid blockchains, permissioned blockchain networks are private blockchains that allow special access for authorized individuals. Organizations typically set up these types of blockchains to get the best of both worlds, as it gives them better structure when deciding who can participate in the network and in which transactions. Consortium Blockchains Similar to permissioned blockchains, consortium blockchains have both public and private components, except multiple organizations manage a single consortium blockchain network. Although these types of blockchains can initially be more complex to set up, once they are running, they can offer better security. Additionally, consortium blockchains are optimal for collaboration among multiple organizations. Hybrid Blockchains Hybrid blockchains are the combination of both public and private blockchains.
In a hybrid blockchain, some parts of the blockchain are public and transparent, while others are private and accessible only to specific, authorized participants. This makes hybrid blockchains ideal for use in cases where a balance is required between transparency and privacy. For example, in supply chain management, multiple parties can access certain information, but sensitive data can be kept private. Sidechains Sidechains are separate blockchains that run in parallel to the main blockchain, allowing for additional functionality and scalability. Sidechains enable developers to experiment with new features and applications without affecting the main blockchain's integrity. For example, sidechains can be used to create decentralized applications and to implement specific consensus mechanisms. Sidechains can also be used to offload transactions from the main blockchain, reducing congestion and increasing scalability. Blockchain Layers Blockchain layers refer to the concept of building multiple layers of blockchains on top of each other. Each layer can have its own consensus mechanism, rules, and functionality, and can interact with the other layers. This enables greater scalability, as transactions can be processed in parallel across different layers. For example, the Lightning Network, built on top of the Bitcoin blockchain, is a second-layer solution that enables faster and cheaper transactions by creating payment channels between users. The Process of Transaction One of Blockchain technology's cardinal features is the way it confirms and authorizes transactions. For example, if two individuals wish to perform a transaction with a private and public key, respectively, the first party would attach the transaction information to the public key of the second party. All of this information is gathered together into a block. The block contains a digital signature, a timestamp, and other important, relevant information.
It should be noted that the block doesn't include the identities of the individuals involved in the transaction. This block is then transmitted across all of the network's nodes, and when the right individual uses their private key and matches it with the block, the transaction gets completed successfully. In addition to conducting financial transactions, the Blockchain can also hold transactional details of properties, vehicles, etc. Hash Encryption Blockchain technology uses hashing and encryption to secure the data, relying mainly on the SHA-256 algorithm. The sender's address (public key), the receiver's address, and the transaction details are run through the SHA-256 algorithm, and the transaction is signed with the sender's private key. The resulting hash is transmitted across the network and added to the blockchain after verification. The SHA-256 algorithm makes it practically impossible to tamper with the hashed information, which in turn simplifies authentication of the sender and receiver. Proof of Work In a Blockchain, each block consists of four main headers. Previous Hash: This hash address locates the previous block. Transaction Details: Details of all the transactions that need to occur. Nonce: An arbitrary number used in cryptography to vary the block's hash address. Hash Address of the Block: All of the above (i.e., the preceding hash, transaction details, and nonce) are run through a hashing algorithm. This produces a 256-bit value, written as 64 hexadecimal characters, called the block's unique 'hash address.' Numerous people around the world try to figure out the right hash value that meets a pre-determined condition, using computational algorithms. The transaction completes when the predetermined condition is met.
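As a concrete illustration, here is a minimal Python sketch of the nonce search described above. The transaction string, the all-zero previous hash, and the difficulty (a required number of leading zeros) are all invented for the example; real networks use far higher difficulty and richer block headers.

```python
import hashlib

def block_hash(previous_hash, transactions, nonce):
    """SHA-256 over the block headers: previous hash, transaction details, nonce."""
    payload = f"{previous_hash}{transactions}{nonce}".encode()
    return hashlib.sha256(payload).hexdigest()

def mine(previous_hash, transactions, difficulty=4):
    """Try nonces until the hash meets the pre-determined condition
    (here: starting with `difficulty` leading zeros)."""
    nonce = 0
    while True:
        digest = block_hash(previous_hash, transactions, nonce)
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = mine("0" * 64, "alice->bob:5")
print(nonce, digest)
```

Note that anyone on the network can re-run `block_hash` once with the winning nonce to check the claim, which is why verification stays cheap even though finding the nonce is expensive.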
To put it more plainly, Blockchain miners attempt to solve a mathematical puzzle, which is referred to as a proof-of-work problem. Whoever solves it first gets a reward. Mining In Blockchain technology, the process of adding transactional details to the present digital/public ledger is called 'mining.' Though the term is associated with Bitcoin, it is used to refer to other Blockchain technologies as well. Mining involves generating the hash of a block transaction, which is tough to forge, thereby ensuring the safety of the entire Blockchain without needing a central system. History of Blockchain Satoshi Nakamoto, whose real identity remains unknown to this day, first introduced the concept of blockchains in 2008. The design continued to improve and evolve, with Nakamoto using a Hashcash-like method. It eventually became a primary component of bitcoin, a popular form of cryptocurrency, where it serves as a public ledger for all network transactions. Bitcoin blockchain file sizes, which contained all transactions and records on the network, continued to grow substantially. By August 2014, it had reached 20 gigabytes, and eventually exceeded 200 gigabytes by early 2020. Advantages and Disadvantages of Blockchain Advantages One major advantage of blockchains is the level of security they can provide, which also means that blockchains can protect and secure sensitive data from online transactions. For anyone looking for speedy and convenient transactions, blockchain technology offers this as well: a blockchain transaction can take only a few minutes, whereas other transaction methods can take several days to complete. There is also no third-party interference from financial institutions or government organizations, which many users look at as an advantage. Disadvantages Blockchain and cryptography involve the use of public and private keys, and reportedly, there have been problems with private keys.
If a user loses their private key, they face numerous challenges, making this one disadvantage of blockchains. Another disadvantage is the scalability restrictions, as the number of transactions per node is limited. Because of this, it can take several hours to finish multiple transactions and other tasks. It can also be difficult to change or add information after it is recorded, which is another significant disadvantage of blockchain. How is Blockchain Used? Blockchains store information on monetary transactions using cryptocurrencies, but they also store other types of information, such as product tracking and other data. For example, food products can be tracked from the moment they are shipped out, all throughout their journey, and up until final delivery. This information can be helpful because if there is a contamination outbreak, the source of the outbreak can be easily traced. This is just one of the many ways that blockchains can store important data for organizations. Decentralization Decentralization is difficult to understand, but it is vital in the world today; decentralization is distributing or dispersing functions, powers, people, or things away from a central location or authority. Within the business world, decentralization typically refers to delegating authority from senior executives to middle managers and other employees further down the organizational hierarchy. The benefits of decentralization are many and varied, but the most commonly cited advantages include improved communication, greater employee empowerment, and increased flexibility and responsiveness. Transparency One of the most critical aspects of decentralization is transparency.
All employees have access to information and decision-making processes in a decentralized organization. This transparency fosters a greater sense of trust and cooperation among employees. Furthermore, it allows employees to hold managers accountable for their decisions. Bitcoin vs. Blockchain Bitcoin is a digital currency that was first introduced in 2009 and has been the most popular and successful cryptocurrency to date. Bitcoin's popularity is attributed to its decentralized nature, which means it doesn't have a central authority or bank controlling its supply. This also means that transactions are pseudonymous, with fees paid to the network's miners rather than to a central intermediary. Blockchain is a database of transactions that have taken place between two parties, with blocks of data containing information about each transaction being added in chronological order to the chain as it happens. The Blockchain is constantly growing as new blocks are added to it, with records becoming more difficult to change over time due to the number of blocks created after them. Blockchain vs. Banks Blockchain has the potential to revolutionize the banking industry. Banks have been slow to adapt to the changing needs of the digital age, and Blockchain provides a way for them to catch up. By using Blockchain, banks can offer their customers a more secure and efficient way to conduct transactions. In addition, Blockchain can help banks to streamline their operations and reduce costs. Why is Blockchain Important? Blockchain is important because it offers security, transparency, and trust across an entire network of users, along with cost savings and more efficient ways to record and share data, giving it the potential to reshape industries well beyond banking. What is a Blockchain Platform?
A blockchain platform is a shared digital ledger that allows users to record transactions and share information in a secure, tamper-resistant way. A distributed network of computers maintains the ledger, and each transaction is verified by consensus among the network participants. Proof of Work (PoW) vs. Proof of Stake (PoS) Proof of work (PoW) is an algorithm used to create blocks and secure the Blockchain. It requires miners to solve a puzzle to create a block and receive the block reward in return. Proof of stake (PoS) is an alternative algorithm for securing the Blockchain, which does not require mining. Instead, users must lock up some of their coins for a certain time to be eligible for rewards. Energy Consumption Concerns of Blockchain The main concern with blockchain technology is its energy consumption. Traditional blockchains like Bitcoin, and Ethereum before its switch to proof of stake, use a consensus mechanism called PoW (Proof of Work), which requires computational power and electricity to solve complex mathematical puzzles. This energy-intensive process has raised concerns about the environmental impact of blockchain technology because it produces carbon emissions and consumes a huge amount of electricity. Blockchain is a distributed database that maintains a continuously growing list of records called blocks. Blockchain is often said to have the potential to disrupt many industries, including banking, law, and healthcare. What are the Benefits of Blockchains Over Traditional Finance? Blockchain offers several potential advantages over traditional finance. One of the most touted advantages is that Blockchain is decentralized, while traditional finance is centralized. This means there is no single point of failure in a blockchain system. Another advantage of Blockchain is that it is more transparent than traditional finance.
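To make the PoW/PoS contrast concrete, here is a toy sketch of the stake-weighted selection that PoS-style systems rely on. The validator names and stake amounts are made up, and real protocols layer lock-up periods, slashing, and verifiable randomness on top of this basic idea.

```python
import random

# Hypothetical validators and the coins they have locked up.
stakes = {"alice": 50, "bob": 30, "carol": 20}

def pick_validator(stakes, rng):
    """Choose the next block proposer with probability proportional to stake."""
    names = list(stakes)
    weights = [stakes[name] for name in names]
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(42)
picks = [pick_validator(stakes, rng) for _ in range(1000)]
print(picks.count("alice") / len(picks))  # tends toward 0.5, alice's share of total stake
```

No puzzle is being solved here, which is why PoS avoids the heavy electricity use that mining-based PoW requires.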
Promising Blockchain Use Cases and Killer Applications Although there are many potential applications for blockchain technology, there are a few that stand out as having the potential to be truly game-changing. These are often referred to as killer applications. Some of the most promising killer applications for blockchain technology include supply chain management, identity management, and data management, and new use cases are being developed every day. How to Invest in Blockchain Technology Blockchain technology and stocks can be a lucrative investment, and there are several ways to take the next step toward making your first blockchain investment purchase. Bitcoin is typically the first thing that comes to mind when it comes to investing in blockchain technology, and it shouldn't be overlooked. Aside from Bitcoin, there is also the option of investing in smaller cryptocurrencies ('altcoins') such as Litecoin. There are also certain apps and services that are in the pre-development phase and that are using blockchain technology to raise funding. As an investor, you can buy coins, with the expectation that prices will go up if the service or app becomes popular. Another way to invest in blockchain technology is to invest in startups built on blockchain technology. Finally, there is always the option to invest in pure blockchain technology. Traditional Finance and Blockchain Investment Strategies In traditional finance, there are two main investment strategies: active and passive. Active investing involves researching and picking individual stocks or other assets and trading them in an attempt to beat the market.
Passive investing, on the other hand, involves investing in a basket of assets and then holding onto them for a long period of time. Both of these strategies have their pros and cons, but there is one major difference between them: active investing is much riskier than passive investing. How Do Different Industries Use Blockchain? Blockchain has the potential to streamline processes across many different industries. In the supply chain industry, for example, Blockchain can track the movement of goods and materials as they change hands. This would allow for greater transparency and accountability and reduce the risk of fraud. In the healthcare industry, Blockchain can be used to secure patient data and streamline the process of billing and claims. What are the Features of Blockchain Technology? Blockchain technology is a distributed ledger that is secure, transparent, and immutable. Blockchain technology can be used to create a decentralized database that is tamper-proof and has the potential to revolutionize the way we interact with the digital world. What are the Key Components of Blockchain Technology? There are three key components to blockchain technology: the distributed ledger, the consensus mechanism, and smart contracts. The distributed ledger is a database that is spread across a network of computers. The consensus mechanism is what allows the network of computers to agree on the state of the ledger. Smart contracts are what allow the blockchain to be used for more than just a database. What are Blockchain Protocols? The three most common protocols are Bitcoin, Ripple, and Ethereum; Bitcoin was the first blockchain protocol and is still the most widely used. Bitcoin- Bitcoin is a decentralized digital currency, often referred to as a cryptocurrency.
It exists on a decentralized network of computers, often called a blockchain, that keeps track of all transactions made using the currency. Bitcoin uses a proof-of-work algorithm to validate transactions and add them to the blockchain. Bitcoin was the first cryptocurrency to be created and is the most well-known. Ripple- Ripple is a cryptocurrency that is similar to Bitcoin in that it uses a decentralized network of computers to keep track of all transactions made using the currency. Unlike Bitcoin, however, Ripple does not rely on proof-of-work mining; its ledger reaches agreement through a consensus protocol among validator nodes. Ripple was created in 2012 and has been among the largest cryptocurrencies by market capitalization. Ethereum- The Ethereum blockchain was initially described in a white paper by Vitalik Buterin in 2013. Buterin, a programmer who was born in Russia and raised in Canada, had been involved with bitcoin from its early days. He was excited by the technology, but he thought that bitcoin needed a scripting language for application development. He decided to create a new platform that would be more general than bitcoin. What is the Difference Between a Database and a Blockchain? So what is the difference between a database and a blockchain? A database is centralized, meaning that a single entity controls it. This entity can be a company, government, or individual. On the other hand, a blockchain is decentralized, meaning that no single entity controls it. How is Blockchain Different From the Cloud? Blockchain is a new technology that is different from the cloud in several ways: Blockchain is decentralized, while the cloud is centralized. This means that Blockchain is distributed across a network of computers, while the cloud is stored on a central server. Blockchain is immutable, meaning that once data is written to the Blockchain, it cannot be changed. What is Blockchain as a Service?
Blockchain as a Service is a cloud-based offering that allows customers to build, host, and use their own blockchain applications, smart contracts, and functions on a provider's cloud platform, such as Microsoft Azure. Azure offers integrated services that make it easy to develop, deploy, and manage blockchain applications. Customers can use Azure's managed services to create and deploy blockchain applications without having to set up and manage their own infrastructure. What are the Implications of Blockchain Technology? Bitcoin, Blockchain's prime application and the whole reason the technology was developed in the first place, has helped many people through financial services such as digital wallets. It has provided microloans and allowed micropayments to people in less than ideal economic circumstances, thereby breathing new life into the world economy. The next major impact is on the concept of trust, especially within the sphere of international transactions. Previously, lawyers were hired to bridge the trust gap between two different parties, but this consumed extra time and money. The introduction of cryptocurrency has radically changed the trust equation. Many organizations are located in areas where resources are scarce and corruption is widespread. In such cases, Blockchain offers a significant advantage to the people and organizations affected, allowing them to escape the tricks of unreliable third-party intermediaries. The new reality of the Internet of Things (IoT) is already teeming with smart devices that turn on your washing machines, drive your cars, navigate your ships, organize trash pick-up, and manage traffic safety in your community. This is where blockchain comes in. In all of these cases (and more), leveraging blockchain technology to create Smart Contracts will enable any organization both to improve operations and to keep more accurate records.
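The self-executing idea behind smart contracts can be illustrated with a toy escrow sketch. This only mirrors the control flow; real smart contracts are deployed on-chain (for example, written in Solidity on Ethereum), and the parties and amount here are invented.

```python
class EscrowContract:
    """Toy self-executing agreement: payment releases automatically once
    the agreed condition (delivery) is met, with no intermediary."""

    def __init__(self, buyer, seller, amount):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount
        self.delivered = False
        self.paid_to = None

    def confirm_delivery(self):
        self.delivered = True
        self._execute()

    def _execute(self):
        # The contract itself enforces the terms the parties agreed on.
        if self.delivered and self.paid_to is None:
            self.paid_to = self.seller

contract = EscrowContract(buyer="alice", seller="bob", amount=100)
contract.confirm_delivery()
print(contract.paid_to)  # bob
```

Because the terms are encoded up front and executed automatically, neither party needs a lawyer or other intermediary to enforce the exchange.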
Blockchain technology enables a decentralized peer-to-peer network for organizations or apps like Airbnb and Uber. It allows people to pay for things like toll fees, parking, etc. Blockchain technology can be used as a secure platform for the healthcare industry for the purposes of storing sensitive patient data. Health-related organizations can create a shared database with the technology and make the information accessible only to appropriately authorized people. In the private consumer world, blockchain technology can be employed by two parties who wish to conduct a private transaction. However, these kinds of transactions have details that need to be hammered out before both parties can proceed: What are the terms and conditions (T&C) of the exchange? Are all the terms clear? When does the exchange start? When will it finish? When is it unfair to halt the exchange? Since blockchain technology employs a shared, distributed ledger on a decentralized network, all parties involved can quickly find answers to these questions by researching 'blocks' in the 'chain.' Transactions on a blockchain platform can be tracked from departure to destination by anyone with access to the chain. How Can Features of Blockchain Support Sustainability Efforts In spite of its huge energy consumption, blockchain technology has features that can support sustainability efforts. For example: Traceability: Blockchain can give transparency and traceability in supply chains, allowing consumers to verify the origins and sustainability of products. This can encourage sustainable practices and discourage unethical practices such as deforestation, illegal fishing, or labor exploitation. Decentralization: Blockchain's decentralized nature helps to eliminate the need for intermediaries, reducing costs and increasing efficiency. This can enable more direct and transparent transactions, reducing the environmental impact associated with traditional intermediaries.
Smart Contracts: These are self-executing contracts that run on the blockchain, eliminating the need for intermediaries and automating processes. This can reduce paperwork, minimize disputes, and streamline operations, contributing to sustainability by cutting paper waste and improving resource utilization. Tokenization: Blockchain enables tokenization, where assets can be represented as digital tokens. This can enable fractional ownership and make it easier for people to invest in sustainable assets such as renewable energy projects or carbon credits, promoting green investments and supporting sustainability initiatives. Conclusion Although we just skimmed the industry-wide potential of blockchain applications in this article, the career potential in this field is growing exponentially. Getting ahead of the game is always a good strategy for any professional. At Simplilearn, our latest and most up-to-date course on this emerging field is the Professional Certificate Program in Blockchain. In partnership with the world-renowned university IIT Kanpur, this program will help you get on track. In this blockchain program, you will learn how to master blockchain concepts, techniques, and tools like Truffle, Hyperledger, and Ethereum to build blockchain applications and networks. FAQs 1. What is Blockchain in Simple Terms? Blockchain is a shared ledger that records transactions and is difficult to modify or change. It also tracks tangible and intangible assets such as cash or a house. 2. How Many Blockchains Are There? There are 4 types of blockchain networks currently - public blockchains, private blockchains, consortium blockchains, and hybrid blockchains. 3. What's the Difference Between a Private Blockchain and a Public Blockchain? Private blockchains are only open to selected people, while public blockchains are open to the general masses. Private blockchains are generally considered more secure than public ones. 4.
What is a Blockchain Platform? A Blockchain Platform is any platform that exists to support or facilitate Blockchains. There are many types of blockchain platforms for different needs, such as Ethereum, Hyperledger, etc. 5. Who Invented Blockchain? Blockchain was created by unknown persons under the pseudonym Satoshi Nakamoto when they designed the online currency, Bitcoin. 6. What is Blockchain used for? While most popularly used for digital currency such as Bitcoin, Blockchain is also now used in different sectors to safeguard records. 7. What are the 3 Pillars of Blockchain Technology? Decentralization, Transparency, and Immutability are the 3 main pillars of blockchain technology. 8. Who Controls the Blockchain? In blockchain, the power is divided between all of the users operating on the network. No single user has any control. 9. Why is Blockchain Important? Blockchain offers security, transparency, and trust between the entire network of users. It also offers cost-saving and efficient methods for data recording and sharing. About the Author Ravikiran A S works with Simplilearn as a Research Analyst. He is an enthusiastic geek, always on the hunt to learn the latest technologies. He is proficient with the Java programming language, Big Data, and powerful Big Data frameworks like Apache Hadoop and Apache Spark.
Mining Explained: A Detailed Guide on How Cryptocurrency Mining Works At its peak, cryptocurrency mining was an arms race that led to increased demand for graphics processing units (GPUs). In fact, Advanced Micro Devices, a GPU manufacturer, posted impressive financial results as demand for the company’s stock skyrocketed and shares traded at their highest level in a decade. Despite the increased demand for GPUs, the crypto mining gold rush quickly came to an end, as the difficulty of mining top cryptocurrencies like Bitcoin increased just as quickly. Mining cryptocurrencies, however, can still be profitable. So, what is crypto mining, is it legal, and how can you get started? This article takes a closer look at these questions. What Is Crypto Mining? Most people think of crypto mining simply as a way of creating new coins. Crypto mining, however, also involves validating cryptocurrency transactions on a blockchain network and adding them to a distributed ledger. Most importantly, crypto mining prevents the double-spending of digital currency on a distributed network. Like physical currencies, when one member spends cryptocurrency, the digital ledger must be updated by debiting one account and crediting the other. However, the challenge of a digital currency is that digital platforms are easily manipulated. Bitcoin’s distributed ledger, therefore, only allows verified miners to update transactions on the digital ledger. This gives miners the extra responsibility of securing the network from double-spending. Meanwhile, new coins are generated to reward miners for their work in securing the network. Since distributed ledgers lack a centralized authority, the mining process is crucial for validating transactions. Miners are, therefore, incentivized to secure the network by participating in the transaction validation process that increases their chances of winning newly minted coins.
In order to ensure that only verified crypto miners can mine and validate transactions, a proof-of-work (PoW) consensus protocol has been put into place. PoW also secures the network from external attacks.

Proof-of-Work

Crypto mining is somewhat similar to mining precious metals. While miners of precious metals unearth gold, silver, or diamonds, crypto miners trigger the release of new coins into circulation. For miners to be rewarded with new coins, they need to deploy machines that solve complex cryptographic puzzles. A hash is a fixed-length digital fingerprint of a chunk of data; hashes are used to secure data transferred on a public network. Miners compete with their peers to find a hash value that satisfies a target set by the network, and the first miner to find a valid hash gets to add the block to the ledger and receive the reward. Each block refers to the previous block by its hash, forming an unbroken chain of blocks that leads back to the first block. For this reason, peers on the network can easily verify whether certain blocks are valid and whether the miners who validated each block properly solved the hash to receive the reward. Over time, as miners deploy more advanced machines to solve PoW puzzles, the difficulty of the puzzles on the network increases. At the same time, competition among miners rises, increasing the scarcity of newly minted cryptocurrency.

How to Start Mining Cryptocurrencies

Mining cryptocurrencies requires computers with special software specifically designed to solve these cryptographic puzzles. In the technology's early days, cryptocurrencies like Bitcoin could be mined with a simple CPU chip on a home computer. Over the years, however, CPU chips have become impractical for mining most cryptocurrencies due to the increasing difficulty levels.
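The hash-based puzzle described in the Proof-of-Work section can be illustrated with a toy sketch. This is a deliberately simplified model, assuming a difficulty expressed as a count of leading zero hex digits; real Bitcoin mining hashes a binary block header twice with SHA-256 and compares it against a numeric target:

```python
import hashlib

def mine_block(prev_hash: str, transactions: str, difficulty: int):
    """Search for a nonce whose block hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        header = f"{prev_hash}|{transactions}|{nonce}".encode()
        block_hash = hashlib.sha256(header).hexdigest()
        if block_hash.startswith(target):
            return nonce, block_hash
        nonce += 1

# Each block commits to the previous block's hash, chaining back to the first block.
genesis_hash = "0" * 64
nonce1, hash1 = mine_block(genesis_hash, "alice->bob:1", difficulty=4)
nonce2, hash2 = mine_block(hash1, "bob->carol:1", difficulty=4)
```

At difficulty 4 this takes tens of thousands of attempts on average; real networks tune the difficulty so that the whole network needs roughly ten minutes per block.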
Today, mining cryptocurrencies requires a specialized GPU or an application-specific integrated circuit (ASIC) miner. In addition, the GPUs in the mining rig must be connected to a reliable internet connection at all times, and most miners also join an online crypto mining pool.

Different Methods of Mining Cryptocurrencies

Different methods of mining cryptocurrencies require different amounts of time. In the technology's early days, for example, CPU mining was the go-to option for most miners. Today, however, most find CPU mining too slow and impractical, because it takes months to accrue even a small amount of profit, given the high electrical and cooling costs and increased difficulty across the board. GPU mining is another method; it maximizes computational power by bringing together a set of GPUs under one mining rig, which requires a motherboard and a cooling system. ASIC mining is yet another method. Unlike GPUs, ASIC miners are designed specifically to mine cryptocurrencies, so they produce more cryptocurrency units than GPUs. However, they are expensive, and, as mining difficulty increases, they quickly become obsolete. Given the ever-increasing costs of GPU and ASIC mining, cloud mining is becoming increasingly popular. Cloud mining allows individual miners to leverage the power of major corporations and dedicated crypto-mining facilities: miners can identify both free and paid cloud mining hosts online and rent a mining rig for a specific amount of time. This is the most hands-free way to mine cryptocurrencies.

Mining Pools

Mining pools allow miners to combine their computational resources in order to increase their chances of finding and mining blocks on a blockchain.
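How a pool might divide a block reward among its members, in proportion to the work each contributed, can be sketched as follows. The flat `pool_fee` and the share counts are illustrative assumptions; real pools layer payout schemes such as PPS or PPLNS on top of this basic idea:

```python
def split_reward(block_reward: float, shares: dict, pool_fee: float = 0.01) -> dict:
    """Split a block reward in proportion to each miner's submitted shares,
    after deducting a hypothetical pool-operator fee."""
    total_shares = sum(shares.values())
    payable = block_reward * (1 - pool_fee)
    return {miner: payable * s / total_shares for miner, s in shares.items()}

# A 6.25 BTC reward split among three miners by work contributed.
payouts = split_reward(6.25, {"alice": 700, "bob": 250, "carol": 50})
```

With these inputs, alice, who contributed 70 percent of the shares, receives 70 percent of the post-fee reward.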
If a mining pool succeeds, the reward is distributed across the pool in proportion to the amount of resources that each miner contributed. Most crypto mining applications come with a mining pool, but crypto enthusiasts also join together online to create their own. Because some pools earn more rewards than others, miners are free to change pools whenever they need to. Miners consider official crypto mining pools more reliable, since they receive frequent upgrades and regular technical support from their host companies. The best place to find mining pools is CryptoCompare, where miners can compare pools based on their reliability, profitability, and the coin they want to mine.

Is Crypto Mining Worth It?

Determining whether crypto mining is worthwhile depends on several factors. Whether a prospective miner chooses a CPU, GPU, ASIC miner, or cloud mining, the most important factors to consider are the mining rig's hash rate, its electric power consumption, and overall costs. Crypto-mining machines generally consume a considerable amount of electricity and emit significant heat: the Bitcoin network as a whole has been estimated to consume on the order of 72 terawatt-hours of electricity per year, with a new block mined roughly every ten minutes. These figures continue to change as technology advances and mining difficulty increases. Even though the price of the machine matters, it is just as important to consider electricity consumption, electricity costs in the area, and cooling costs, especially with GPU and ASIC mining rigs. It is also important to consider the difficulty level of the cryptocurrency an individual wants to mine, in order to determine whether the operation would even be profitable.

The Tax Implications of Crypto Mining

Crypto miners will generally face tax consequences (1) when they are rewarded with cryptocurrency for performing mining activities, and (2) when they sell or exchange the reward tokens.
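Returning to the "Is Crypto Mining Worth It?" question, the factors listed there (hash rate, power draw, electricity price) combine into a rough daily estimate. Every number below is an illustrative placeholder, not a current network value, and the function is a simplification that ignores pool fees, hardware depreciation, and difficulty changes:

```python
def daily_mining_profit(rig_hashrate, network_hashrate, blocks_per_day,
                        block_reward, coin_price, power_kw, electricity_per_kwh):
    """Expected daily profit: the rig's share of block rewards minus electricity cost."""
    expected_coins = (rig_hashrate / network_hashrate) * blocks_per_day * block_reward
    revenue = expected_coins * coin_price
    energy_cost = power_kw * 24 * electricity_per_kwh
    return revenue - energy_cost

# Illustrative placeholder numbers only (hash rates in TH/s).
profit = daily_mining_profit(rig_hashrate=100, network_hashrate=300_000_000,
                             blocks_per_day=144, block_reward=6.25,
                             coin_price=20_000, power_kw=3.0,
                             electricity_per_kwh=0.10)
```

With these placeholder figures the rig loses about $1.20 a day, illustrating how electricity costs can erase mining revenue.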
With respect to (1), the IRS has issued Notice 2014-21, which directly addresses the tax implications of crypto mining. Under the Notice, a miner recognizes gross income upon receipt of the reward tokens in an amount equal to the fair market value of the coins at the time of receipt. Additionally, if a taxpayer's mining activities constitute a trade or business, or the taxpayer undertakes such activities as an independent contractor, the reward tokens/virtual currency payments are deemed to be self-employment income and accordingly subject to self-employment taxes. Similarly, if a taxpayer performs mining activities as an employee, payments made in cryptocurrency are treated as wages subject to federal income tax withholding, Social Security/Medicare, and unemployment taxes.

Is Crypto Mining Legal?

Most jurisdictions and authorities have yet to enact laws governing cryptocurrencies, meaning that, for most countries, the legality of crypto mining remains unclear. In the United States, the Financial Crimes Enforcement Network (FinCEN) has indicated that crypto miners may be considered money transmitters, so they may be subject to the laws that govern that activity. In Israel, for instance, crypto mining is treated as a business and is subject to corporate income tax. In India and elsewhere, regulatory uncertainty persists, although Canada and the United States appear friendly to crypto mining. Apart from jurisdictions that have specifically banned cryptocurrency-related activities, however, very few countries prohibit crypto mining. Our Freeman Law Cryptocurrency Law Resource page provides a summary of the legal status of cryptocurrency for each country across the globe with statutory or regulatory provisions governing cryptocurrency.

Conclusion: The Sustainability of Crypto Mining

For aspiring crypto miners, curiosity and a strong desire to learn are simply a must. The crypto mining space is constantly changing as new technologies emerge.
The professional miners who receive the best rewards are constantly studying the space and optimizing their mining strategies to improve their performance. On the other hand, climate-change advocates have become increasingly concerned, as more and more fossil fuels are burned to fuel the mining process. Such concerns have pushed cryptocurrency communities like Ethereum to move away from PoW frameworks toward more sustainable alternatives, such as proof-of-stake frameworks.
The Crypto Token Economy Is Second-Order Fraud
The cryptocurrency meltdown is regularly described as a liquidity crisis by industry insiders and uncritical media outlets. The story goes something like this: a downturn in crypto markets, perhaps the result of negative trends in the broader economy, triggered a liquidity crisis that led to cascading bankruptcies across the industry. By this telling, the trouble began back in May when the Terra (UST) stablecoin began to de-peg from the dollar as its sister cryptocurrency, Luna, crashed in value. The price of both cryptocurrencies fell to practically nothing within a few days, wiping out $US45 billion in market value. The immediate fallout resulted in a loss of value of $US300 billion across cryptocurrency markets within the week. (That figure has since grown to over $US2 trillion as prices have continued to slump.) Highly leveraged cryptocurrency investment firms suffered staggering losses. In June, Three Arrows Capital, a major crypto hedge fund that had borrowed heavily to leverage their own crypto investments, could not meet margin calls and was quickly forced into liquidation. With so many loans going into default, crypto lenders started to go under as well. At the time of liquidation, Three Arrows Capital owed lenders $US3.5 billion, with little ability to repay. Voyager Digital, a major crypto lender, was left on the hook for $US370 million in Bitcoin and another $US350 million in USDC stablecoins that they had loaned Three Arrows. Celsius Network, another major crypto lender, had loaned Three Arrows $US75 million in USDC—and that was just the beginning of their troubles. Suffering its own heavy investment losses, Celsius acknowledged a $US1.2 billion hole in its balance sheet. In truth, the hole was far larger, as their assets included billions in obscure cryptocurrencies issued by Celsius itself and similar firms, as well as almost a billion in loans to such entities.
Though cryptocurrency is generally thought of as liquid—Bitcoin has been called “digital cash”—these more obscure digital assets proved illiquid and, ultimately, of little real value as the firms issuing them began to fail. Though not regulated as such, these crypto lenders were operating as banks, offering lavish returns to depositors putting up their own cryptocurrency as collateral. Without even FDIC insurance on their settlement accounts, depositors rushed to get funds out before the firms collapsed. Without sufficient cash on hand, Voyager and Celsius paused withdrawals before filing for bankruptcy in July. In November, FTX, a major cryptocurrency exchange branding itself as the responsible, good-faith actor in an otherwise dodgy industry, was the next domino to fall. Leaked balance sheets from FTX’s sister company, Alameda Research, revealed that the trading firm was holding most of its assets in FTX’s house “token,” FTT. This raised questions about the unusually close relationship between the two firms (it was later revealed that FTX was secretly and illicitly funneling depositors’ funds to Alameda to fund risky crypto investments), as well as their solvency. FTT, like similar assets held by recently failed crypto firms, was highly illiquid. In response to the leak, the CEO of Binance, the largest cryptocurrency exchange by trading volume, announced that it would liquidate its entire substantial holdings in FTT, which caused the token to crash in value. Following a now-familiar arc, depositors rushed to withdraw funds from FTX, forcing the exchange to pause withdrawals for lack of liquidity. Within five days, FTX, Alameda Research, and various subsidiaries—having been recently valued at well over $US40 billion, collectively—began the bankruptcy process as well. The industry contagion continues. Last week, Genesis, yet another leading crypto lender, also declared bankruptcy. 
The firm, a subsidiary of crypto venture capital firm Digital Currency Group, owes approximately $US3.5 billion to its top 50 creditors. Digital Currency Group, in addition to investing in hundreds of crypto companies, owns several other subsidiaries as well, including the major crypto asset management company Grayscale Investments, which claimed to hold over $US50 billion in digital assets as of 2021 and now, amid market uncertainty, refuses to show proof of its own reserves due to “safety and security” concerns. We can only speculate which firms go down next. The above narrative emphasizes the liquidity crisis spreading across crypto firms at risk of overlooking their fundamental insolvency. A liquidity crisis is a cash flow problem—immediate financial obligations cannot be met as they come due. While an accounting liquidity crisis can certainly lead to defaults and bankruptcy, the term implies that the organization is otherwise solvent. In the case of recently failed cryptocurrency firms, this was clearly not the case. During a liquidity crisis, distressed organizations seek out loans to cover immediate operating expenses. If they truly are solvent, they may well find lenders. Insolvent firms, on the other hand, usually cannot. No one wants to throw good money at organizations that are going to fail anyway—not other firms, not even central banks acting as lenders of last resort during industry-wide financial crises. Since central banks would not be bailing out unregulated cryptocurrency firms, they had only one another to turn to. Early in the crisis, FTX was known for shoring up or acquiring smaller crypto firms in financial trouble. They bailed out crypto lender BlockFi over the summer by offering a $US400 million lifeline of credit, which kept that firm alive until FTX also collapsed. With much of the cryptocurrency industry melting down, FTX had fewer places to turn, especially for a firm their size. 
Binance was the only cryptocurrency exchange doing more volume than FTX. But while Binance announced plans to save FTX through an acquisition and merger, they backed out the next day after a peek at their financials. A leaked balance sheet gives some insight into why. FTX was claiming $US9 billion in liabilities but only $US900 million in liquid assets. Most of their assets were marked either “less liquid” or “illiquid.” As with other failed crypto firms, FTX was holding the lion’s share of their assets in obscure cryptocurrencies issued by the firm itself or other companies and projects with close ties to FTX or its disgraced CEO Sam Bankman-Fried. Crypto firms issue these obscure cryptocurrencies, which we can refer to collectively as “house” tokens for convenience, to facilitate trades, settle debts, issue loans, post collateral, and conduct other financial transactions while remaining in the insular and poorly regulated cryptocurrency space. These tokens allow firms, as well as their customers, to transact without having to involve traditional financial institutions, at least until someone wants to cash out of the crypto space. Some of these house tokens are stablecoins pegged to a fixed amount (usually the dollar), but many fluctuate in price on markets, just like any other financial asset. Such house tokens may be branded as “security tokens,” when they are supposed to explicitly confer ownership of assets or debt, “governance tokens,” if they are intended to confer a kind of “voting share” to be executed on the blockchain, or simply a “utility token” when primarily intended to be used on a native platform. But no matter what their originally intended or ostensible use case, these tokens are often traded between firms as payment, loans, or collateral. When used in this manner, they all function as unregulated securities. 
(This is arguably true of stablecoins, too, which are also used for loans and collateral, as their value depends upon the health and survival of the issuing company defending the peg.) Many big crypto firms issue such house tokens. FTX had their FTT tokens, Voyager Digital the Voyager Token, and Celsius their CEL tokens. Unlike Bitcoin, or even Ethereum and Dogecoin, these tokens are not well known outside of cryptocurrency spaces and have little appeal to the masses. As such, cryptocurrency firms often generate retail demand for house tokens—which helps confer at least some level of liquidity and market valuation—by offering users various rewards. FTX gave traders discounts for using FTT. Crypto lenders, including Celsius and Voyager, have offered depositors what are effectively crypto “savings accounts” with annual percentage yields as high as 20 percent or more, an obscene return unseen in regulated financial markets. Similar offerings can be found in the world of decentralized finance, or “DeFi” for short. Terraform Labs, creator of Terra and Luna, created demand for their tokens by offering depositors similarly too-good-to-be-true returns through an automated lending program, the Anchor Protocol. But whether these programs are executed automatically “on the blockchain” or managed by a boring old spreadsheet in an accounting office, they serve an identical purpose: generating retail demand by offering returns that are only sustainable as long as new money keeps coming into the system. Critics, as well as regulators, have described these digital assets and projects as rather obvious Ponzi schemes. Despite choosing not to acquire FTX, Binance CEO and cofounder Changpeng “CZ” Zhao cannot have been too surprised by what he saw on their balance sheet. His cryptocurrency exchange has its own platform-specific utility token—the Binance Token (BNB), as well as a native stablecoin, BUSD. 
Binance appears to operate in much the same way as other troubled and failed cryptocurrency projects and firms. Unsurprisingly, Binance also appears headed in much the same direction. The exchange has suffered $US12 billion in outflows in recent months, at one point temporarily pausing some withdrawals, though the company contends this is all business as usual. (This may well be true, but other troubled crypto firms offered similar assurances only to announce bankruptcy shortly thereafter.) To shore up confidence, Binance released limited internal reviews—particularly uncharacteristic for a notoriously secretive firm—though their internal finances remain a “black box.” BNB has shed significant value in recent weeks due to investor concerns, and, while the company hasn’t entered collapse yet—at least not publicly—reasonable observers may get the feeling that we have seen this one before. The prototype for house tokens is the controversial stablecoin Tether (USDT), which originally launched in 2014 (under the name Realcoin). The various companies and shell companies responsible for issuing USDT (hereinafter referred to in this article as “Tether” for simplicity) share ownership and executive leadership with the Bitfinex cryptocurrency exchange, a relationship the firms sought to obscure and deny until it was confirmed by the Paradise Papers in 2017. Bitfinex has long struggled to maintain stable banking partnerships, but tethers—functioning as little $US1 IOUs—allow trades to be settled on blockchain, which also offers interoperability across crypto markets. Within the world of cryptocurrency, tethers have been just as good as dollars for almost a decade now. Many amateur traders and investors may not even be aware that settlement accounts on many crypto exchanges are denominated in tethers, not actual dollars. There are currently over 66 billion tethers in circulation, down from a high of over 83 billion last year. 
Tether initially lied about the stablecoin being backed one-for-one by cash—for which it paid $US41 million in fines in 2021—and has repeatedly changed or walked back claims about their reserves. Tether claimed to hold a large amount of “commercial paper”—essentially corporate IOUs—until the collapse of other crypto firms holding illiquid assets created enough fear around Tether for it to slip five percent off its peg in May. Presumably in response, Tether announced that their reserves no longer held commercial paper. Their latest attestation claims that their reserves are “extremely liquid” and include almost $US40 billion in US Treasury bills, but given their history of misrepresentation and refusal to undergo a real third-party audit, such claims should be taken with a whole shaker of salt. Tether’s reserves matter because, unlike Bitcoin, there is no hard limit on how much Tether can go into circulation. Tether routinely mints the stablecoin by the billions and sends them off to cryptocurrency exchanges and firms around the world. (Prior to its collapse, FTX was Tether’s biggest customer.) If these tokens are insufficiently collateralized, then Tether is basically printing “money” from thin air. While the company works to defend the peg and claims it can redeem tethers at face value, its terms of service make it clear they are under no obligation to do so. Critics, as well as litigants, have accused the company of using (apparently largely unbacked) Tether tokens to manipulate the price of cryptocurrency assets. John M. Griffin at the University of Texas, and Amin Shams at Ohio State University found that half of the rise in the price of Bitcoin during the 2017–2018 bubble was the result of price manipulation using Tether on the Bitfinex exchange. They concluded that the perpetrator was a single entity that was almost certainly the exchange or an accomplice. The allegations are certainly plausible. 
With limitless tethers at their disposal and a major crypto exchange in their possession, they could easily buy up Bitcoin and other cryptocurrencies to drive up the spot price. I have argued elsewhere that this kind of price manipulation renders cryptocurrency as a whole a giant decentralized Ponzi scheme and that a full ban on cryptocurrency is the best, and probably only, solution. Cryptocurrency markets are global. There is no realistic way for regulators to stop foreign entities from manipulating cryptocurrency prices with unbacked stablecoins. However, there are limits to how high Bitcoin prices can be artificially manipulated in this way. Most popular cryptocurrencies, including Bitcoin, employ a “proof of work” consensus mechanism for verifying updates to the blockchain. Critics sometimes mock this process as “proof of waste.” Cryptocurrency “miners,” which are simply network participants competing to solve pointless cryptographic puzzles for the right to approve transactions and collect a reward of cryptocurrency (a “block reward”), now waste unfathomable amounts of electricity. This waste is by design. The difficulty of the puzzles scales with the amount of total processing power thrown at the network—known as the “hash rate”—so that the cost of tampering with the network scales with the hypothetical reward for doing so, thus helping to ensure the integrity and security of the blockchain. But proof-of-work blockchains are only prohibitively expensive to attack because they are so expensive to run and maintain. This is precisely why mining difficulty scales with cryptocurrency prices. Crypto miners are locked in a perpetual arms race upon which the only hard cap is the price of the cryptocurrency being mined. The system incentivizes miners to add more and more processing capacity until mining costs exceed the profits from collecting block rewards. 
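The scaling described above, where puzzle difficulty tracks the network's total processing power, is implemented in Bitcoin by retargeting the difficulty every 2016 blocks so that blocks keep arriving roughly every ten minutes. A simplified sketch (real Bitcoin adjusts a numeric target using integer arithmetic, but the factor-of-four clamp is part of the actual protocol):

```python
def retarget_difficulty(old_difficulty: float, actual_timespan_s: int,
                        target_timespan_s: int = 2016 * 600) -> float:
    """Scale difficulty so the next 2016 blocks take ~two weeks (10-minute blocks).
    Adjustments are clamped to a factor of 4 in either direction, as in Bitcoin."""
    ratio = target_timespan_s / actual_timespan_s
    ratio = max(0.25, min(4.0, ratio))
    return old_difficulty * ratio

# If total hash rate doubles, 2016 blocks arrive in half the target time,
# so difficulty doubles to restore ~10-minute blocks.
new_diff = retarget_difficulty(1_000_000, actual_timespan_s=2016 * 300)
```

This feedback loop is why the cost of attacking the chain, and the cost of running it, rises and falls with the amount of hardware miners deploy.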
If stablecoin issuers are artificially inflating cryptocurrency prices, they are also necessarily driving up mining costs. But miners cannot pay utility bills with stablecoins. They need real cash to avoid shutting down or going into debt. Higher prices thus force miners to convert more of their earnings into actual cash. This places some limit on how high unbacked stablecoins can pump cryptocurrency prices without making the whole operation—including crypto miners—insolvent. At some point, using stablecoins to artificially inflate crypto prices will eat up all of the real cash liquidity coming into the cryptocurrency space, and the result will be a liquidity crisis that more stablecoins cannot fix. The limits that mining costs place on this kind of artificial price inflation are not just financial but also physical. Bitcoin mining alone—to say nothing of other proof-of-work coins—was using half of a percent of the world’s entire electricity consumption in 2022. Some reports have estimated that aggregate cryptocurrency mining activities in 2022 could have totaled almost one percent of global electricity production. So long as more energy remains available to miners, energy consumption will continue to scale linearly with price, according to economist Alex de Vries, who has been tracking cryptocurrency energy consumption since 2014. Bitcoin investors have become accustomed to bull runs that bring tenfold returns, maybe more. But Bitcoin prices 10 times the previous high would incentivize miners to use 10 times the energy—five percent of global electricity production. A subsequent bull run of the same magnitude would require half of the world’s current electricity production. I would say “and so on and so forth,” but you see the problem here. Of course, crypto miners cannot use electricity capacity that doesn’t exist, nor would most operate at a loss. 
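The linear price-to-energy relationship de Vries describes follows from the break-even logic above: miners keep adding capacity until electricity costs eat up block-reward revenue. A minimal model of that equilibrium, with illustrative placeholder numbers rather than real network figures:

```python
def equilibrium_network_power_mw(coin_price, block_reward, blocks_per_day,
                                 electricity_per_kwh, cost_share=1.0):
    """Upper bound on steady-state network power draw, assuming miners
    collectively spend up to `cost_share` of block-reward revenue on
    electricity. A simplified, hypothetical model of the linear scaling."""
    daily_revenue_usd = coin_price * block_reward * blocks_per_day
    daily_kwh = cost_share * daily_revenue_usd / electricity_per_kwh
    return daily_kwh / 24 / 1000  # average draw in megawatts

power_now = equilibrium_network_power_mw(20_000, 6.25, 144, electricity_per_kwh=0.05)
power_10x = equilibrium_network_power_mw(200_000, 6.25, 144, electricity_per_kwh=0.05)
```

Under these assumptions a tenfold price rise implies a tenfold equilibrium power draw, which is the linear relationship described above.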
The likely result of “overinflating” Bitcoin prices is that some miners would halt operations and the hash rate would fall until mining again became profitable. However, with Bitcoin prices still high, this would leave the network more vulnerable to a devastating “51% attack”—the very thing the system is designed to prevent. Manipulating cryptocurrency prices to a high-enough level to keep luring in new money without breaking the whole system is likely a careful balancing act that gets harder with each successive bull run. This helps explain the reduced returns. For years, crypto boosters pointed to the fact that Bitcoin had never crashed below the previous cycle’s all-time high as proof that it never would. But Bitcoin prices have spent much of the last six months well under the almost $US20,000 highs of the previous bubble set back in 2017. Though the current lows may represent an inflection point, the trend isn’t new. Bitcoin’s annual ROI has been trending down since its inception. Despite growing media coverage and hype, every bull run since at least 2013 has produced lower returns than the previous one. Stablecoins, artificial liquidity, and market manipulation cannot solve this problem. Proof-of-work blockchains simply require too much energy to operate at scale. Market manipulation has helped sustain interest in what is essentially a negative-sum investment for probably at least a decade now. But luring in new investors requires ever-higher prices, and ever-higher prices are creating ever-higher mining costs. The scheme is even less sustainable than traditional Ponzi schemes, which don’t require dedicating a growing share of new investors’ money toward massive processing centers that now rival the size of the entire world’s traditional data centers. Financial and resource limits place some theoretical hard limitations on growing the cryptocurrency ecosystem. 
But, ultimately, de Vries told me, the real limit on cryptocurrency mining—and, by extension, cryptocurrency itself—is likely to be political. Diverting so much energy toward crypto mining activity is neither tenable nor sustainable. Policymakers will eventually have to step in before miners consume anywhere near the entirety of global energy production. This is already happening. China banned cryptocurrency mining in 2021, which sent miners underground or fleeing to more permissive locales. The European Union is again considering a mining ban as the European energy crisis worsens. In the United States, where crypto mining already gobbles up as much as 1.7 percent of the nation’s electrical output, New York placed a moratorium on new cryptocurrency mining permits at fossil fuel plants. In Texas, where favorable regulatory conditions attracted more mining activity than any other state, the state’s grid operator has slowed the issuance of new permits due to added stress on an already-strained power grid. Nationally, the Biden administration is exploring cryptocurrency regulations, such as tighter controls on stablecoins and other digital assets and a possible ban on some crypto mining. The inability of stablecoins to manipulate the price of Bitcoin and other cryptocurrencies ever higher helps explain the emergence of increasingly complex financial schemes built atop crypto markets. Initial coin offerings (ICOs), undercollateralized security tokens, the Ponzi-like financial offerings of crypto lenders—these new digital assets are more easily managed and manipulated than the lumbering Bitcoin blockchain with its massive overhead. Such schemes are perhaps the only path forward for crypto in the face of diminishing returns from proof-of-work cryptocurrencies and the inability to manipulate their prices higher. Unfortunately for those orchestrating these projects, they are much more recognizable as Ponzi schemes and far easier to prosecute. 
Tether, and other such bad actors, allegedly conducted their fraud on shadowy foreign exchanges beyond the reach of regulators. They did so off the Bitcoin blockchain, which offers plausible deniability to “legitimate” regulated companies benefiting from artificially inflated cryptocurrency prices. By comparison, crypto firms issuing and artificially inflating the value of their house tokens are just plain old Ponzi schemes. They have proven much easier to identify and prosecute as such. Sam Bankman-Fried was indicted and arrested for, among other charges, his role in orchestrating securities and commodities fraud at FTX and Alameda Research. Voyager Digital is under investigation, as is Celsius Network. Do Kwon, CEO of Terraform Labs, is on the run after a South Korean court issued an arrest warrant for him on fraud and other charges. The Commodity Futures Trading Commission is suing Gemini—a prominent US-based crypto exchange operated by the Winklevoss twins—for misleading regulators about the workings of a Bitcoin futures product. In addition to charges against a mounting number of individuals running various crypto token Ponzi schemes too numerous to list here, the US Securities and Exchange Commission (SEC) just charged both Gemini and Genesis with selling unregistered securities. At this point, pretty much every major player in the industry appears to be under investigation, and the future of crypto looks bleak. In June 2015, YouTube user Alex Millar uploaded a video, now lore in cryptocurrency circles, recounting Bitcoin’s many boom-and-bust cycles. Tongue planted in cheek throughout the video, he warns viewers not to buy Bitcoin since “you know it’s gonna crash.” The video does the rounds on online crypto spaces whenever prices tumble. “Zoom out,” crypto boosters remind would-be new investors and “weak hands” considering selling out to stop losses. The implication is that, since Bitcoin has always recovered to new highs after every crash, so it shall again. 
These boom–bust cycles have become so routine that even mainstream media outlets now speak of “crypto winter” without reflection. The implication, again, is that no matter how bad things look now, someday the season will turn. So far, it always has, so I court an army of laser-eyed trolls merely by suggesting that this time might be different. Forecasting speculative markets is always fraught, to say nothing of those so poorly regulated and highly manipulated as cryptocurrency markets. Those calling the end of Bitcoin or crypto have so far been proven wrong or—more likely—simply premature, so pardon me for hedging my bets, but I won’t go that far. Fraud, like life, finds a way. But if the price manipulation driving recent crypto bubbles is no longer financially viable or politically tenable, then crypto may well have entered a new era of diminished future prospects. Ethereum, a blockchain platform home to the second most popular cryptocurrency (Ether), may be charting a new path forward. In September 2022, after years of delay, Ethereum finally completed a software upgrade known as “the Merge” that moved the platform away from a proof-of-work consensus mechanism to a much less energy-intensive “proof-of-stake” system. The new system replaces crypto miners with validators who “stake” their own cryptocurrency in exchange for a yield. By doing away with mining entirely, the switch has reduced the energy consumption of the Ethereum blockchain by over 99.99 percent. Though the change has been years in the making, the timing of the Merge may not be so coincidental if rising mining costs are hamstringing crypto markets. While post-Merge Ethereum is far more environmentally friendly than its previous incarnation, the switch to proof-of-stake has caught the attention of regulators. Though the SEC has previously deemed Ether (and other proof-of-work cryptocurrencies) not to be securities, it may be reversing course after the Merge.
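The staking mechanism just described can be illustrated with a stake-weighted draw. This is a toy sketch, not Ethereum's actual validator-selection algorithm; the function name, stake values, and selection rule are all invented for illustration:

```python
import random

def pick_validator(stakes: dict, seed: int) -> str:
    """Pick the validator that proposes the next block, with probability
    proportional to its staked amount (a simplification of real protocols)."""
    rng = random.Random(seed)
    names = sorted(stakes)
    return rng.choices(names, weights=[stakes[n] for n in names], k=1)[0]

# A validator staking 9x more should be picked roughly 9x as often.
wins = sum(pick_validator({"small": 1.0, "large": 9.0}, s) == "large"
           for s in range(1000))
```

Unlike mining, the only scarce resource here is the staked cryptocurrency itself, which is why the switch eliminates essentially all of the energy cost.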
SEC Chair Gary Gensler recently suggested that cryptocurrency exchanges offering staking—which is inherent to the proof-of-stake system—look “very similar” to crypto lenders. The SEC forced crypto lenders to register with the agency last year and fined BlockFi US$100 million for failing to do so. And, as we know, crypto lenders aren’t doing so well under increased regulatory scrutiny. Ethereum helped popularize “smart contracts” and became a foundation for DeFi and the broader crypto finance sector. Various ICOs, stablecoins, and other security tokens were built on Ethereum, many of which have been revealed as Ponzi schemes, big and small. Following the move to proof-of-stake, Ethereum now more clearly resembles the Ponzi schemes and sketchy firms it hosts, the ones using crypto to sidestep financial regulations. It’s Ponzis all the way down, and it always has been, but proof-of-work mining once helped obscure that fundamental truth. After the Merge, Ethereum is a more efficient Ponzi scheme at the cost of being a more transparent one. In the end, the greatest innovation of cryptocurrency may have been its ability to evade regulatory scrutiny. Blockchain—which is essentially just a distributed append-only spreadsheet—was a remarkable mystifier when it involved proof-of-work. But the novelty and tangibility of crypto mining appear to have been indispensable to blockchain’s ability to confuse and obfuscate. Proof-of-stake projects are simply much easier to recognize as the Ponzi schemes they are. Now that excessive energy consumption has curtailed the expansion of the proof-of-work cryptocurrencies upon which the crypto industry has been built, the jig—it appears—is finally up. The scorched-earth behavior of some of the biggest players in the cryptocurrency space suggests they know the walls are finally coming down.
The falling valuation of better-regulated crypto companies apparently operating mostly within the bounds of the law—Coinbase stock has been down as much as 90 percent from its 2021 IPO in recent weeks—suggests a poor outlook for even the “legitimate” firms operating in a sector driven by fraud once that fraud is excised. When your house is a Ponzi scheme built atop Ponzi schemes atop a Ponzi scheme, everything starts to come down when the base buckles.
Critics, as well as litigants, have accused the company of using (apparently largely unbacked) Tether tokens to manipulate the price of cryptocurrency assets. John M. Griffin at the University of Texas and Amin Shams at Ohio State University found that half of the rise in the price of Bitcoin during the 2017–2018 bubble was the result of price manipulation using Tether on the Bitfinex exchange. They concluded that the perpetrator was a single entity that was almost certainly the exchange or an accomplice. The allegations are certainly plausible. With limitless tethers at their disposal and a major crypto exchange in their possession, they could easily buy up Bitcoin and other cryptocurrencies to drive up the spot price. I have argued elsewhere that this kind of price manipulation renders cryptocurrency as a whole a giant decentralized Ponzi scheme and that a full ban on cryptocurrency is the best, and probably only, solution. Cryptocurrency markets are global. There is no realistic way for regulators to stop foreign entities from manipulating cryptocurrency prices with unbacked stablecoins. However, there are limits to how high Bitcoin prices can be artificially manipulated in this way. Most popular cryptocurrencies, including Bitcoin, employ a “proof of work” consensus mechanism for verifying updates to the blockchain. Critics sometimes mock this process as “proof of waste.” Cryptocurrency “miners,” which are simply network participants competing to solve pointless cryptographic puzzles for the right to approve transactions and collect a reward of cryptocurrency (a “block reward”), now waste unfathomable amounts of electricity. This waste is by design. The difficulty of the puzzles scales with the amount of total processing power thrown at the network—known as the “hash rate”—so that the cost of tampering with the network scales with the hypothetical reward for doing so, thus helping to ensure the integrity and security of the blockchain.
But proof-of-work blockchains are only prohibitively expensive to attack because they are so expensive to run and maintain.
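The puzzle mechanism described above can be sketched in a few lines of Python. This is a toy illustration, not Bitcoin's actual mining code (which compares a double SHA-256 digest against a 256-bit target); here difficulty is just a count of leading zero hex digits:

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Brute-force a nonce whose SHA-256 digest starts with `difficulty`
    zero hex digits; the loop is the 'work' in proof-of-work."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("toy-block", 2)
proof = hashlib.sha256(f"toy-block:{nonce}".encode()).hexdigest()
```

Raising `difficulty` by one multiplies the expected number of hashes by 16, which is how the real network keeps the cost of tampering proportional to the total hash rate.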
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9159387/
Manipulation of the Bitcoin market: an agent-based study
Abstract
Fraudulent actions of a trader or a group of traders can cause substantial disturbance to the market, whether directly, by influencing the price of an asset, or indirectly, by misinforming other market participants. Such behavior can be a source of systemic risk and increasing distrust for the market participants, consequences that call for viable countermeasures. Building on the foundations provided by the extant literature, this study aims to design an agent-based market model capable of reproducing the behavior of the Bitcoin market during the time of an alleged Bitcoin price manipulation that occurred between 2017 and early 2018. The model includes the mechanisms of a limit order book market and several agents associated with different trading strategies, including a fraudulent agent, initialized from empirical data, who performs market manipulation.
The model is validated with respect to the Bitcoin price as well as the amount of Bitcoins obtained by the fraudulent agent and the traded volume. Simulation results provide a satisfactory fit to historical data. Several price dips and volume anomalies are explained by the actions of the fraudulent trader, completing the known body of evidence extracted from blockchain activity. The model suggests that the presence of the fraudulent agent was essential to obtaining the observed Bitcoin price development in the given time period; without this agent, it would have been very unlikely that the price would have reached the heights it did in late 2017. The insights gained from the model, especially the connection between liquidity and manipulation efficiency, unfold a discussion on how to prevent illicit behavior.
Introduction
Cryptocurrencies are a digital alternative to legal fiat money. Rather than being issued by competent governmental authorities, their implementation is based on the principles of cryptography used to validate all transactions and generate new currency. Every transaction that occurs is recorded in a public ledger. The blockchain, and more generally distributed ledgers, facilitate innovation in multiple domains of activity. These include, but are not limited to, supply chain management, data sharing, accounting, e-voting, and, as the most prominent area, finance [see, e.g., the overview in Casino et al. (2019)]. While it is indisputable that the blockchain by itself had and has a great influence on public discourse, with innovation potential comparable to that of the Internet (as it fosters a decentralized infrastructure for economic transactions), financial experts remain generally skeptical.
The implementation and the characteristics (including the strictly technological ones) of blockchain technology, when proposed as a replacement for standard fiat currency, are subject to ongoing discussion (Berentsen and Schär 2018; Dierksmeier and Seele 2018; Ertz and Boily 2019; Glaser and Bezzenberger 2015). A major problem surrounding cryptocurrencies—but also one of the reasons why they have become well known to the general public—is the heavy tails of their return distribution (Chan et al. 2017) and their volatility (Bariviera 2017), resulting in a rich history of “bubbles” (Gerlach et al. 2018). Although the innovative potential of distributed ledger technologies is vast, the innovation itself does not necessarily translate into trust (see, e.g., Bodó 2021). Traditional markets and exchanges were fairly successful in establishing a trustworthy environment via governmental or international institutions, robust legislative activity, market regulations, and effective monitoring/oversight systems. This development took many decades after a long history of market abuse (Putniņš 2012), and remains an area of active research. It can be said that each new case of market abuse brought a better understanding of market vulnerabilities and often led to viable countermeasures. Furthermore, every new technology potentially brings new techniques for committing fraud. Now, cryptocurrencies, crypto-assets, and various forms of blockchain services are still in their infancy. Therefore, new methods need to be invented or reinvented for this new medium to establish a reliable and fair market environment, ideally while maintaining the decentralized and (semi)anonymous nature of the underlying blockchain technology. With this motivation, we focus in this study on one example where the cryptocurrency market was supposedly manipulated via fraudulent actions of one market participant. A data-driven model is developed and validated using historical data.
The behavior of the fraudulent entity is investigated in detail and included in the model. Toward the end, we conclude our investigations with a discussion on how our findings can be applied to improve trust by reducing the present vulnerabilities of crypto-markets. In the remainder of this section, we will provide a brief overview of studies on fraud in cryptocurrencies and on agent-based modeling (especially in the context of crypto markets), and we will then highlight the specific contributions of this paper.
Fraud and cryptocurrencies
Several illicit activities are related to cryptocurrencies, such as black-market trading (Foley et al. 2019), money laundering, and terrorist financing (Fletcher et al. 2021). In our case, we focus on fraud that targets and disrupts the market. A common form of fraud in crypto markets is wash trading (Cong et al. 2020; Victor and Weintraud 2021). The principle of wash trading is to execute trades where the buyer and seller are the same entity. Thus, false impressions of highly traded assets are created to mislead investors. Another more serious form of fraud observed in crypto markets is pump-and-dump schemes (Kamps and Kleinberg 2018), which typically take the form of coordinated actions to increase the market price in a short time period (Hamrick et al. 2019; Li et al. 2018). In the literature, we find various studies that attempt to explain price as a direct consequence of manipulative behavior. A study (Gandal et al. 2018) that analyzed suspicious market practices on the Mt. Gox exchange concludes that fraudulent actions influenced the price growth from $150 to $1000 in late 2013. More recently, Griffin and Shams (2019) argue that the Bitcoin market price might have been inflated by the issuance of Tether. As observed in a 2014 study (Robleh et al.
2014), Bitcoin and other cryptocurrencies served as a medium of exchange for a relatively small number of people; therefore, they posed no serious material risk to monetary and financial stability, but today investors increasingly involve crypto-assets in their portfolios, and some large companies or payment services are already accepting payments in Bitcoin. This means that cryptocurrency volatility can potentially be a new source of systemic risk to the entire economy and financial sector. Recent studies have approached risk using methods such as clustering (e.g., Li et al. 2021), multi-objective feature selection (e.g., Kou et al. 2021), or network analysis (e.g., Anagnostou et al. 2018). Focusing more on the source of systemic risk originating in illicit behavioral schemes, although advances in detection of wash trading (Victor and Weintraud 2021) and pump-and-dump schemes (Chen et al. 2019) are already taking place, new models are needed that can explain, simulate, or possibly predict the effects of fraudulent behavior, and that can serve as a testbed for testing the effectiveness of policies, regulations, or monitoring enforcement mechanisms. One way to satisfy this demand is to consider models that combine qualitative and quantitative knowledge, which can be designed with a strong reliance on empirical data and can simulate various scenarios to address questions regarding the effectiveness of regulatory interventions in the crypto market, as discussed in Shanaev et al. (2020).
Agent-based modelling
Agent-based models generally aim to explain some complex phenomena, where the emergent behavior at the macro-level is hypothesized to be a consequence of behavioral rules at the micro-level. For a historical review, we refer to Chen (2012).
In recent years, this modeling paradigm has been enhanced by more modern data-driven approaches, where behavioral data specific to each agent are used to construct, initialize, or estimate the parameters of a model of each agent’s decision mechanism. Only a relatively small number of parameters are left to be calibrated for the aggregated data, which increases the model’s validity and credibility. With this approach, even large-scale models are capable of rivaling the predictive power of traditional quantitative methods, for example, in the area of economic research (Poledna et al. 2019). These models can be particularly instrumental if the parameters of individual agents are of vital importance, for example, to test interventions during the COVID-19 pandemic (Kerr et al. 2021). In the literature, several examples of agent-based models can be found that have been created to gain insights into crypto markets. Most of these models are based on various financial or behavioral assumptions. To the best of our knowledge, the first study in this area is Luther (2013), where agents are put into a currency market with switching costs and network effects to investigate the widespread acceptance of cryptocurrency. A similar question was studied by Bornholdt and Sneppen (2014). Cocco et al. (2017) made an implicit assumption of demand, enhanced by speculative traders and restricted by finite resources for each agent; it is the earliest example of a limit order book-based model of the Bitcoin market, attempting to explain the price increase from the start of 2012 to April 2014. This model was later extended by mining (Cocco and Marchesi 2016) and evolutionary computation (Cocco et al. 2019). Other order book models are presented in Pyromallis and Szabo (2019) and Zhou et al. (2017), where the focus is mainly on the adaptive behavior of traders. In Lee et al.
(2018), a combination of inverse reinforcement learning directly from Bitcoin blockchain data and order book agent-based modeling was used to make short-term predictions of the market price. Recently, models focusing on policy recommendations have also been developed. Shibano et al. (2020) introduces a price stabilization agent to reduce the volatility, and Bartolucci et al. (2020) investigates a design extension of the Bitcoin blockchain to increase transaction efficiency. A strong aspect of the agent-based models is that they provide an experimental environment for policymakers. Once a behavioral schema is identified and methods to measure and assess the consequences are settled, the simulated environment can be utilized to test the effectiveness of a set of alternative policies, given some adaptation rate, monitoring, and enforcement, and to identify the best one. In a recent review (Lopez-Rojas and Axelsson 2016), agent-based models are considered a tool for generating synthetic data for machine learning models, which can be used, for example, to complement more traditional evaluation methods (Kou et al. 2014). Most notably, agent-based models were developed in the area of urban crime modeling (Groff et al. 2019) or to study the behavioral aspects of tax evasion (Pickhardt and Prinz 2014). In principle, these models are not limited only to observed fraudulent behavior: they can extend the design of fraud-committing agents by considering different market manipulation schemes and measuring and assessing their consequences. By choosing a suitable representation of the fraud schema, it is possible to find more sophisticated patterns of reasoning for a fraudulent agent [e.g., by applying algorithmic evolutionary methods (Hemberg et al. 2016)].
Contributions
Most studies focus on analyzing the statistical relationship between price and a set of exogenous variables.
Conversely, in this study, we focus on the qualitative explanation dimension. Our approach builds on the qualitative findings in Griffin and Shams (2019), but, in contrast to this study, we construct a data-driven model, focusing mainly on the causal influence of the fraudulent behavior that supposedly inflated the Bitcoin price. This methodological innovation can be regarded as the main contribution of this study, along with the conceptualization of a specific fraud schema as an algorithm that can be executed by an agent in a simulated cryptocurrency market. Note that this approach opens the door to a broader view on the role of the fraudulent trader in the Bitcoin market, thus allowing us to analyze the situation from various points of view. For instance, as our market model is capable of generating market data such as the market price, the market volume, or the Bitcoin inflow of the fraudulent trader, it is possible to compare these quantities to empirical data. In particular, we discover that certain anomalies in market volume or dips in market price can be attributed to the actions of a fraudulent trader, an experimental conclusion, which completes the evidence presented in Griffin and Shams (2019). Furthermore, the model developed in this study allows us to investigate specific reasons behind the success of the market manipulation via the fraud schema. Connections between the efficiency of a specific manipulation strategy and transaction costs will be explored. To do so, a realistic model of order book liquidity has to be implemented. Most studies implicitly or explicitly assume sufficient liquidity near the mid-price and an exponential decrease in liquidity further away from the mid-price, using a Gaussian assumption, or more relaxed forms. We propose a new liquidity distribution model based on a mixture of two components. The Gaussian assumption is kept near the mid-price, and a beta distribution is used to model the situation more deeply in the order book.
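The two-component liquidity model can be sketched as a sampler for the depth of resting limit orders. All parameter values below are invented for illustration; the paper calibrates its mixture to empirical order-book data:

```python
import random

def sample_order_depth(rng, w_near=0.7, sigma=0.4,
                       alpha=2.0, beta_b=5.0, max_depth=20.0):
    """Distance of a resting limit order from the mid-price (in percent),
    drawn from a mixture: a half-Gaussian concentrated near the mid-price
    plus a beta component for liquidity deeper in the book."""
    if rng.random() < w_near:
        return abs(rng.gauss(0.0, sigma))              # near-mid liquidity
    return max_depth * rng.betavariate(alpha, beta_b)  # deep-book liquidity

rng = random.Random(42)
depths = [sample_order_depth(rng) for _ in range(10_000)]
```

Under this mixture, most liquidity sits within a fraction of a percent of the mid-price, while a thin, wide tail of orders rests far deeper in the book, the shape that later matters for the cost of manipulation.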
The study of market manipulations (and their consequences) has a long tradition in the economic literature (Putniņš 2012). To the best of our knowledge, the present study is the first to construct an agent that reproduces the actions of a fraudulent trader directly using blockchain transaction data, and to reconstruct the market behavior from this predictor. In addition, our simulation environment can be easily expanded with more sophisticated artificial intelligence models, thus contributing to the active area of research concerned with the integration of artificial intelligence with blockchain technology (Pandl et al. 2020; Salah et al. 2019). Focusing on the economic study dimension of the paper, most of the assumptions we formulate to construct the proposed computational model attempt to provide a sound story (based on previous studies analyzing the Bitcoin market) aiming to reconstruct market behavior in a given time period. Our findings might challenge the opinion that the main predictors of the Bitcoin bubble of late 2017 and the beginning of 2018 would be variables associated with the market sentiment (see Kapar and Olmo 2021). While we do not deny that market sentiment plays a major role, our results confront the thesis that the occurrence of this price bubble is spontaneous or a consequence of the widespread popularity of Bitcoin. In this sense, we contribute to the ongoing discussion among economists on the price formation of cryptocurrencies.
Background
This section elaborates on the alleged price manipulation using Tether in 2017/18, presenting the technology at stake, the associated socio-technical system, and considerations shared in the relevant literature.
What is Tether and why is it controversial
Tether is a cryptocurrency whose market price is pegged to the US dollar, making it one of the so-called stablecoins.
The objective of Tether is to facilitate transactions between cryptocurrency exchanges, making them easier for traders than with fiat money because many exchanges have challenges in establishing banking relationships and meeting their strict regulatory requirements. Tether is issued by Tether Limited, which claims that every issued Tether is backed by one dollar. Tether Limited publishes end of month (EoM) statements to prove this. This claim is somewhat controversial from several points of view, as discussed in Griffin and Shams (2019), pointing out suspicious auditing methods. Publishing the statement about the reserves potentially gives leverage to the issuer to issue more Tether than the current amount of capital reserves in between the audits. Following a series of investigations started by the New York Attorney General Letitia James filing a suit in April 2019, Bitfinex and Tether agreed to pay a penalty of $18.5 million in a settlement in February 2021. Furthermore, on February 23rd, Attorney General James claimed that Tether had lied about its reserves. One of the first exchanges to accept Tether, and a close associate of Tether Limited through several shared shareholders, is the Bitfinex exchange. The analysis (Griffin and Shams 2019) exposed and analyzed suspicious flows of Tether from the Bitfinex exchange to other exchanges that accept Tether, mainly Bittrex and Poloniex. Before arriving at the target exchanges, the flow passes through several addresses on the Tether blockchain. Once the Tether is exchanged for Bitcoin, Bitcoin flows back to Bitfinex. As analyzed in their study, these flows were highly correlated with the price increase. Additionally, Griffin and Shams (2019) identified the dominant addresses and concluded that the addresses were likely controlled by the same individual. We will use these insights to model the manipulator’s behavior by observing the change in the balance of the most relevant address.
Manipulation scheme
The possibility of pushing Tether into the market gives rise to a simple price inflation scheme that can be placed into the category of pump-and-dump schemes. However, as will be explained later, it is even more “powerful” in terms of the dimensions in which profit is generated. In its procedural essence, this scheme can be viewed as an algorithm, and its outline is visualized in Fig. 1 (note that in the real world, many more possibilities of action come into play depending on the circumstances, and the whole scheme can be much more complicated). The strategy of price inflation mostly relies on the assumption that the market will respond with positive feedback (inflow of buy orders) as a consequence of the Bitcoin buy orders executed by the fraudulent trader. Once the positive trend of the market price is established and sustained, the trader’s cash buffer can be refilled if needed, which means that there will be enough cash for the EoM statements to be satisfied. In principle, the positive feedback assumption is unnecessary because a long position is built up even if the market reacts negatively. However, in that case, an additional source of dollars to cover up the EoM statements would be needed; that is, an initial capital or a risk-bearing third party would have to be involved. Then, the trader can sustain the long position and wait until the market conditions are more favorable to restart the scheme.
Fig. 1: Price inflation scheme. Unbacked Tether is issued and pushed into the Bitcoin market; the fraudulent trader must have enough cash to cover the EoM statements.
The profits generated by the scheme in the case of a positive response must be understood in two ways. First, as a way to increase the value of the Bitcoins the fraudulent trader already has in possession, by triggering the inflow of new buyers. This is the main similarity to the pump-and-dump schemes. Second, as a way to obtain “free” Bitcoin.
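The cycle in Fig. 1 can be written down as a toy algorithm. Everything here is invented for illustration (the linear price-impact coefficients, the fixed slice size, the `ToyMarket` class); the paper's fraudulent agent is instead initialized from blockchain data and trades against a full limit order book:

```python
class ToyMarket:
    """Hypothetical market with a crude linear price impact; the impact
    coefficients are invented for illustration, not fitted to data."""
    def __init__(self, price):
        self.price = price

    def buy(self, usd):
        btc = usd / self.price
        self.price *= 1 + 0.01 * (usd / 1_000_000)  # buying pushes the price up
        return btc

    def sell(self, btc):
        usd = btc * self.price
        self.price *= 1 - 0.002 * btc               # selling pushes it down
        return usd

def inflation_cycle(market, issued_tether, eom_cash_needed):
    """One stylized pass through the Fig. 1 loop: buy Bitcoin with unbacked
    Tether, then liquidate in many small slices until enough dollars are
    raised to cover the end-of-month statement."""
    btc_bought = market.buy(issued_tether)          # pump with unbacked Tether
    slice_btc = btc_bought / 100                    # small sell orders
    cash, btc_sold = 0.0, 0.0
    while cash < eom_cash_needed:
        cash += market.sell(slice_btc)
        btc_sold += slice_btc
    return btc_bought - btc_sold                    # surplus ("free") Bitcoin

market = ToyMarket(10_000.0)
surplus = inflation_cycle(market, 1_000_000.0, 500_000.0)
```

Because the pump raises the price before the liquidation starts, fewer Bitcoins need to be sold than were bought, leaving the positive `surplus` that the text calls "free" Bitcoin.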
If the price increased sufficiently, the fraudulent trader would sell smaller amounts of Bitcoins for dollars than the amount bought with Tether to cover the EoM statements; thus, there will be a surplus of Bitcoins. The crucial question that the fraudulent trader needs to address is deciding on the selling strategy. One plausible strategy would be to pump the price as high as possible and then sell a sufficient amount of Bitcoin by executing a sequence of sell orders a few days before the date of the EoM statement publication. For the reasons explained in later sections, we believe it is cost-effective if the sequence consists of very small sell orders; in this way, the liquidation process takes advantage of high liquidity near the current price, but it is also harder for the rest of the market participants to notice, and so the price should not drop too drastically. The liquidation strategy via a sequence of small sell orders can be further enhanced by executing small sell orders on multiple exchanges. This would make it more challenging to trace the liquidation process; indeed, though the study of Griffin and Shams (2019) performs an analysis of the outflow from Bitfinex reserves during the times concurrent with the publication of the EoM statements, the question of where these flows end remains unanswered.
Volume anomalies
In Griffin and Shams (2019), it was concluded that Tether flows from suspicious addresses are correlated with the price increase. We extend these observations in the context of volume and influence on other traders. We argue that it should be possible to see evidence of fraudulent traders selling their unlawfully obtained Bitcoins in the traded volume to satisfy the EoM statements.
Indeed, if the fraudulent trader has an incentive to sell large amounts of Bitcoins within a span of a few days shortly before publishing the EoM statement, or at least somewhere around that time, it is expected that the volume in this time span would temporarily increase both directly on the exchanges where the selling takes place and secondarily as a response of other traders reacting to increased amounts of sell orders. In both cases, such actions must be visible in the total Bitcoin trade volume and several large exchanges’ volumes.
Data collection
As the trade volumes of Poloniex and Bittrex were several times higher than those of other large exchanges such as Coinbase or Bitflyer, we decided not to use these data, as those exchanges probably experienced wash trading. Instead, we used traded volume data from exchanges that obtained a Bitlicense (Chohan 2018) issued by the New York State Department of Financial Services or had similarly reported volumes. We downloaded the volume data from https://data.bitcoinity.org and aggregated the trade volume of trustworthy exchanges (Bitfinex, Bitflyer, Bithumb, Bitstamp, Coinbase, and Kraken) and the total volume of other smaller exchanges. If Poloniex and Bittrex volumes were not artificially increased, we would naturally use their volumes for model validation; however, this was not the case. For this reason, we need to define a reference exchange, which will serve as a baseline when analyzing the simulations, to estimate how much influence the fraudulent agent has in terms of traded volume. We then take the volume data of trustworthy exchanges from the same source and take the averages over daily values. As the fraudulent agent was active on two exchanges, we multiply the averages by two.
Data analysis
Figure 2 reports the resulting aggregated volume. The red bars correspond to the fraudulent agent supposedly liquidating some of the Bitcoins to satisfy the schema in Fig. 1.
We will refer to the days when the liquidation process takes place as EoM events because the chosen days generally correspond to the end-of-month statements published by Tether Limited on the 15th of every month. As the fraudulent trader likely had some initial capital, these days do not have to correspond exactly to the 15th of every month. The general pattern is that these spikes tend to occur every 2 months. As can be seen from Table 1, especially in July, September, November, and January, the liquidation process seems to match the 15th day of the month very well. Additionally, we hypothesize that the blue and green bars in Fig. 2 correspond to the market responding to an increase or decrease in price as a consequence of actions performed by the fraudulent trader. The blue bars correspond to the volume increase due to an increase in buying, and the green bars correspond to an increase in selling. We refer to these days as large-scale events (LSEs). A possible explanation for these events is that some investors entering or leaving the market temporarily increased the volume, triggering a secondary response from other traders. However, the true reason behind these volume anomalies remains an open question. Given the uncertainty and as this study aims to focus on the modeling of a fraudulent trader, we will not attempt to model LSEs as actions of some specific agents, but we will assume them in the simulation as prior knowledge (exogenous events).
Inter-exchange influence and liquidity
Before we start building the agent-based model of the market, it is important to discuss our assumption that influencing the price on two exchanges is sufficient to influence the market price across all other exchanges. The direct way in which one exchange can influence the price is by trading large volumes of Bitcoin.
Most web services that report the price of Bitcoin calculate the price as an average over the last traded price on several exchanges, weighted by the traded volume. These services must have a way of detecting wash trading, but they can hardly filter out a fraudulent trade, such as the one described in previous sections. Therefore, if seemingly legal fraudulent trades of large volumes are executed on one exchange, then the reported price will be skewed by the activity of this exchange, diminishing the influence of the other exchanges. It is clear that if fraudulent buy orders are matched with sell orders with high limit prices, the calculated Bitcoin market price will consequently be pushed higher than the average price traded on other exchanges. A second way the activity on one exchange can influence the whole market is by traders observing price fluctuations on multiple exchanges and generating a profit by taking advantage of these small price differences. It was concluded in Chordia et al. (2008) that such an arbitrage activity, if stimulated by sufficient liquidity, results in higher price efficiency, which, in turn, results in a more stable market price unless new external information enters the market. However, in Marshall et al. (2018), analyzing a database of Bitcoin intraday data on 14 exchanges, including prices of 13 currencies, it was observed that cryptocurrency markets tend to be illiquid and hence less price-efficient. This means that there is a lower overall agreement on the price of Bitcoin. From this, it can be concluded that the variations in price across all major exchanges, given the low liquidity of Bitcoin, can increase price volatility. Indeed, in the same study, evidence shows that an increase in illiquidity corresponds with an increase in crash risk across all pairs when liquidity proxies are either the effective spread or price impact. 
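The volume-weighted aggregation described above can be sketched in a few lines of Python. This is an illustration with entirely synthetic numbers (the prices, volumes, and the `vwap` helper are ours, not from the study): one exchange trading large volumes at inflated prices pulls the reported average toward its own price.

```python
# Illustrative sketch (synthetic numbers): how a volume-weighted average
# price is skewed by one exchange trading large volumes at inflated prices.

def vwap(quotes):
    """quotes: list of (last_price, traded_volume) pairs, one per exchange."""
    total_volume = sum(v for _, v in quotes)
    return sum(p * v for p, v in quotes) / total_volume

# Three exchanges roughly agree on a price near 1000.
normal = [(1000.0, 50.0), (1002.0, 60.0), (998.0, 40.0)]

# A fraudulent trader pushes the price up on one exchange and, by trading
# large volumes there, increases that exchange's weight in the average.
manipulated = normal + [(1100.0, 300.0)]

print(round(vwap(normal), 2))       # 1000.27
print(round(vwap(manipulated), 2))  # 1066.76, pulled toward 1100
```

The single manipulated exchange carries two thirds of the total volume, so it dominates the weighted average even though the other three exchanges disagree with its price.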
This volatility–liquidity relationship was confirmed quantitatively by several studies (Næs and Skjeltorp 2006; Tripathi et al. 2020; Valenzuela et al. 2015). Based on this argument, one might expect ascendancy among different cryptocurrency exchanges. The earliest study to investigate this question is Brandvold et al. (2015). This study discusses a leader–follower relationship between various exchanges, linking them to specific events regarding Chinese government policies or the arrest of the Silk Road black market owner (October 2, 2013). Interestingly, the Mt. Gox exchange was identified to have a large but decreasing information share in the market; however, during the period concurrent with the price manipulation period described in Gandal et al. (2018), the Mt. Gox exchange re-established its dominant position in the market. This is not only consistent with the previous arguments but also provides an early example that manipulative behavior on one exchange can influence the price of the entire market. In conclusion, illiquidity and low agreement among traders about the price of Bitcoin create favorable conditions for a manipulation scheme to be executed successfully. In later sections, we discuss illiquidity in greater detail, showing that the way liquidity is distributed in the order book can provide an essential advantage for the fraudulent trader.

Exchange model

The level of granularity assumed for our investigation is a limit order book model in which orders are placed in a public order book. On cryptocurrency exchanges, an order can enter the order book every second. In our exchange model, orders can enter every minute to simplify processing, which means that each trading day d consists of T=1440 tics (minutes).
We use the time index t to measure the time in the model in minutes, and the time index τ to measure the time in days; for example, pt denotes the price at time t, and pτ denotes the price at the end of a trading day τ.

Limit order book market model

The market environment is based on the model presented in Raberto et al. (2005). Each trader can observe the order book Ot at time t; that is, a table consisting of six columns: order type, Bitcoin amount, residual amount, limit price, issue day, and expiration day. With respect to the limit price, the buy orders are sorted in descending order and the sell orders are sorted in ascending order. Issue time is the second sorting criterion when the limit prices are equal. Each trading day is split into T tics during which traders can issue orders. If the issue day exceeds the expiration day, the order is removed from the order book. Market orders are issued by setting the limit price to zero. At time t, we denote Bj[Ot] as the limit price of the j-th buy order, and Si[Ot] as the limit price of the i-th sell order. The sell order of index i and the buy order of index j are matched if and only if Si[Ot]≤Bj[Ot]. The order-matching mechanism is defined as follows:

if Si[Ot]=0 and Bj[Ot]>0, then pt←min(Bj[Ot], pt);
if Bj[Ot]=0 and Si[Ot]>0, then pt←max(Si[Ot], pt);
if Si[Ot]=0 and Bj[Ot]=0, then pt←pt;
if Si[Ot]>0 and Bj[Ot]>0, then pt←(Bj[Ot]+Si[Ot])/2.

Every time a new order enters the order book, the first sell and buy orders are inspected to check whether they satisfy Si[Ot]≤Bj[Ot], and the new market price is decided according to the order-matching mechanism. As more than one order can be issued at time t, the last match at time t determines the current price pt. We do not consider expiration times within a minute during the simulation because this would unnecessarily complicate the model.

Expiration time, price and amount distributions

One factor that determines the price, and a crucial property of every exchange, is the order book depth.
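The matching rule can be sketched in a few lines of Python (a minimal illustration with hypothetical helper names; a zero limit price denotes a market order, as above):

```python
def match_price(best_sell, best_buy, p_t):
    """Price update when the best sell order S_i and the best buy order B_j
    match. A zero limit price denotes a market order.
    Returns the new market price."""
    S, B = best_sell, best_buy
    if S == 0 and B == 0:       # two market orders: price unchanged
        return p_t
    if S == 0:                  # market sell hits a limit buy
        return min(B, p_t)
    if B == 0:                  # market buy hits a limit sell
        return max(S, p_t)
    return (B + S) / 2          # two crossing limit orders: their midpoint

print(match_price(0, 0, 100.0))         # 100.0
print(match_price(0, 95.0, 100.0))      # 95.0  (market sell into a bid)
print(match_price(105.0, 0, 100.0))     # 105.0 (market buy into an ask)
print(match_price(99.0, 101.0, 100.0))  # 100.0 (midpoint of 99 and 101)
```

Note how a market buy can only move the price up (via the `max`) and a market sell only down (via the `min`), while two crossing limit orders settle at their midpoint.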
In principle, the order book depth is defined by the distribution of Bitcoin amounts and the limit prices placed in the order book by traders. In our environment, almost all traders decide the Bitcoin amount and limit price by sampling these two values from predefined distributions, thus filling the order book with orders. Based on the findings presented in Schnaubelt et al. (2019), we hypothesize that four main empirical properties are relevant to our study: (1) a broad hump-shaped (bimodal) distribution of limit prices; (2) quickly rising transaction costs; (3) a relatively small volume concentrated around the mid-price, compared to the total volume provided by the order book; and (4) both sides of the order book being on average symmetric with respect to the mid-price. We assume that the limit price and Bitcoin amount distributions are independent for simplicity. We assume that the bimodal shape of the limit price distribution is due to a mixture of two distributions. The first component is modeled by a Gaussian distribution N(μ,σ), with mean μ and variance σ. The second component, representing the tail of the limit price distribution, is modeled by a beta distribution Beta(α,β), where α,β are the shape parameters. To produce an on average symmetric distribution, the limit price in the former case is defined as pt·N(μ,σ) for buy orders and pt/N(μ,σ) for sell orders. For the tail, we must introduce two additional parameters: the location parameter a and the scale parameter c (Johnson et al. 1995). Then, the limit price of orders placed deeper into the order book is, for buy orders:

LimitPriceTailBuyt ∼ pt[1 − c + (c − a)·Beta(α,β)]  (1a)

and for sell orders:

LimitPriceTailSellt ∼ pt[1 + c − (c − a)·Beta(α,β)]  (1b)

The second component defining market depth is the amount distribution. As we mainly control the transaction costs using the limit prices, the amount distribution is less important, but we will attempt to make it realistic nonetheless.
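The two-component limit price sampler might look as follows in Python. The parameter values are illustrative, not the calibrated ones, and the tail forms used here, pt[1 − c + (c − a)X] for buys and pt[1 + c − (c − a)X] for sells with X ∼ Beta(α,β), are our reconstruction of Eqs. (1a) and (1b):

```python
import random

random.seed(0)

def limit_price(p_t, side, tail, mu=1.0, sigma=0.04, a=0.015, c=0.5,
                alpha=0.3, beta=1.0):
    """Sample a limit price around the current price p_t.
    Gaussian component near the mid-price; beta-distributed tail deeper
    in the book. All parameter values here are illustrative."""
    if not tail:                          # Gaussian component near mid-price
        g = random.gauss(mu, sigma)
        return p_t * g if side == "buy" else p_t / g
    x = random.betavariate(alpha, beta)   # tail component
    if side == "buy":
        return p_t * (1 - c + (c - a) * x)
    return p_t * (1 + c - (c - a) * x)

buys = [limit_price(10000.0, "buy", tail=True) for _ in range(1000)]
sells = [limit_price(10000.0, "sell", tail=True) for _ in range(1000)]
print(max(buys) <= 10000 * (1 - 0.015))   # True: buy tail below (1 - a) p_t
print(min(sells) >= 10000 * (1 + 0.015))  # True: sell tail above (1 + a) p_t
```

With these forms, tail buy prices fall in (pt(1 − c), pt(1 − a)) and tail sell prices in (pt(1 + a), pt(1 + c)), mirroring each other around the mid-price.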
Several characteristic properties of the amount distribution were observed empirically (Cong et al. 2020). The main characteristic to be captured is the bias of traders toward certain “round” values, such as 0.5, 1, 1.5, 2, …. We construct this distribution as a mixed discrete/continuous distribution consisting of a Poisson distribution and an exponential distribution of the form:

Amounts ∼ (1 − q)·(0.5 + 0.5·Pois(λP)) + q·Exp(λE)  (2)

where q∈[0,1] and λP, λE are rate parameters. Finally, the expiration time of an order influences the distribution of limit prices and amounts over time. Similar to Cocco et al. (2017), we use the floor value of the log-normal distribution with parameters μL, σL. In the simulation, we set these parameters to relatively low values because it seems plausible to assume that traders will be cautious about keeping any order in the order book for too long, given the uncertainty about the Bitcoin price. In addition, we assume that the expiration time is independent of the price and amount.

Agent models

The success of the scheme used by the fraudulent trader depends on the response of the market. Therefore, we speak of the market response model, or market response agents, when referring to the response of the market to the actions of the fraudulent agent (FA).

Market response agents

Random agents
Random agents (RAs) issue buy or sell orders with equal probability and hold with probability 1−PRA. The limit price is sampled from the Gaussian component defined above.

Random speculative agents
Random speculative agents (RSAs) issue buy or sell orders in the same way as RAs. The limit price is sampled from the beta distribution according to Eqs. (1a) and (1b), which means the limit prices of their orders are relatively far away from the mid-price. The RSA thus speculates that, given the market's volatility, even orders placed deeper in the order book will be matched. The probability that an RSA will hold is 1−PRSA.
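A sampler in the spirit of Eq. (2) can be sketched as follows. The parameter values are illustrative, not the calibrated ones, and the small Poisson helper is ours (the standard library has no Poisson sampler):

```python
import math
import random

random.seed(1)

def poisson(lam):
    """Knuth's inversion method; adequate for the small rates used here."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def sample_amount(q=0.2, lam_p=2.0, lam_e=1.5):
    """Mixed discrete/continuous amount distribution in the spirit of
    Eq. (2): with probability 1 - q a 'round' amount 0.5, 1.0, 1.5, ...,
    and with probability q a continuous exponential amount."""
    if random.random() < 1 - q:
        return 0.5 + 0.5 * poisson(lam_p)   # round values
    return random.expovariate(lam_e)         # continuous component

amounts = [sample_amount() for _ in range(5000)]
round_share = sum(1 for a in amounts if 2 * a == int(2 * a)) / len(amounts)
print(0.75 < round_share < 0.85)  # True: roughly 1 - q of amounts are round
```

The discrete branch reproduces the observed clustering on multiples of 0.5, while the exponential branch contributes arbitrary fractional amounts.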
Chartist agents
Chartist agents (CAs) observe the average of Bitcoin returns over the window [τ−l, τ], where l is the window length. The probability that a CA will issue an order is PCA. If the average return is positive, the CA issues a buy order; otherwise, a sell order. The limit price is sampled from the Gaussian component. CAs are active if the market price is above $50, and they follow their initial strategy until the price reaches $20000. Subsequently, the CA will decide with probability QCA to issue a sell order and with probability (1−QCA)PCA to continue the initial trend-following strategy. The parameter QCA can be interpreted as the CA's belief that the price will drop after reaching its presumed maximum. If the price happens to decrease to $10000, the CA will return to pure trend following [for this threshold price approach, see, for instance, Lee and Lee (2021)].

Fraudulent agent
In principle, the fraudulent agent's behavioral script is defined by the buying and selling schedules. The buying schedule is constructed directly from the available data on Tether outflows. The selling schedule is constructed following the discussion in previous sections, considering the empirical findings related to Bitcoin order book liquidity.

Cash matrix
A cash matrix C(t) defines the amount of cash that the FA will use to issue a buy order on a given day and minute. Using this capital, the FA calculates the amount of Bitcoins to buy from the order book and then issues a market order. Let us define bt as the amount of Bitcoin the FA has in possession at time t. The amount of Bitcoin to be obtained at bt+1 depends on the available cash allocated in the cash matrix and the state of the order book. The cash matrix was constructed from the amounts of Tether sent from the 1J1d and 1AA6 addresses, as identified in Griffin and Shams (2019), spanning 1 year and 3 months from January 1, 2017, to March 1, 2018.
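The CA decision rule can be sketched as follows (a Python sketch with hypothetical names; the windowed average of returns is approximated here by the mean per-day return over [τ−l, τ]):

```python
import random

def chartist_order(prices, tau, l=5, p_ca=0.5, q_ca=0.3, regime="trend"):
    """One CA decision on day tau. `regime` is 'trend' while the CA follows
    its initial strategy and 'cautious' after the price has crossed $20000,
    until it falls back to $10000. Returns (action, regime) with action in
    {'buy', 'sell', None}."""
    p = prices[tau]
    if p < 50:                           # CAs are inactive at very low prices
        return None, regime
    if regime == "cautious" and p <= 10000:
        regime = "trend"                 # return to pure trend following
    if p >= 20000:
        regime = "cautious"
    if regime == "cautious" and random.random() < q_ca:
        return "sell", regime            # belief that the price will drop
    if random.random() < p_ca:
        # mean per-day return over the window [tau - l, tau]
        avg_ret = (prices[tau] - prices[tau - l]) / (prices[tau - l] * l)
        return ("buy" if avg_ret > 0 else "sell"), regime
    return None, regime

# On a steadily rising price series, a CA with p_ca = 1 always buys.
rising = [100.0 + t for t in range(60)]
actions = [chartist_order(rising, t, l=5, p_ca=1.0)[0] for t in range(5, 60)]
print(set(actions))  # {'buy'}
```

In the cautious regime the sell probability is q_ca and trend following continues with probability (1 − q_ca)·p_ca, as in the description above.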
Ninety percent of Tether flows from Bitfinex to Poloniex go to the 1J1d deposit address, and 72% of Tether flows from Bitfinex to Bittrex go to 1AA6. If we identify one Tether with one USD, ignoring negligible fluctuations in the price of Tether, then these flows provide a compelling picture of the FA's capital. As the timescale of the model is minutes per day, the Tether flows are aggregated per minute. As the market model is a scaled-down model of an exchange, the cash matrix also needs to be scaled down, which is done by multiplying the cash matrix element-wise by the scalar parameter s.

Selling strategy
The selling strategy is the FA's strategy for liquidating a portion of the Bitcoins to refill the cash buffer and thereby satisfy the EoM statements. We claim that these selling days roughly correspond to the dates when EoM statements are published by Tether Limited, that is, the 15th of every month, but the FA does not need to meet this deadline strictly, given that the FA most likely has backup capital available. Although there are no strict consequences for the FA for not fulfilling the obligations in the model environment, we assume that if bt<0 at any point in time, the FA will exit the market to maintain a long position on the obtained Bitcoins. The exit of the FA typically occurs when the market response is not sufficiently positive and the price is too low for the FA to regain capital by selling Bitcoins. If everything goes as planned, the FA will sell a small amount of Bitcoins every minute by issuing a limit sell order, decreasing the number of Bitcoins bt that the FA has in possession at time t. As the order book is relatively liquid near the mid-price, it is logical for the FA to issue only small sell orders and avoid large sell orders because of the rapid increase in transaction costs.
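The per-minute aggregation and scaling of the cash matrix described above can be sketched as follows (the `flows` records and the value of s are hypothetical; 1 USDT is identified with 1 USD as in the text):

```python
from collections import defaultdict

def build_cash_matrix(flows, s):
    """Aggregate Tether outflows into per-(day, minute) cash cells and
    scale them element-wise by s. `flows` entries are hypothetical
    (day, minute, tether_amount) records."""
    C = defaultdict(float)
    for day, minute, amount in flows:
        C[(day, minute)] += amount          # per-minute aggregation
    return {cell: cash * s for cell, cash in C.items()}

# Two flows in the same minute are merged into one cash cell.
flows = [(0, 10, 1_000_000.0), (0, 10, 250_000.0), (1, 730, 500_000.0)]
C = build_cash_matrix(flows, s=0.5)
print(C[(0, 10)])   # 625000.0
print(C[(1, 730)])  # 250000.0
```

During a simulation, the FA would read the cell C(t) for the current day and minute and spend that cash as a market buy order.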
Thus, each minute the FA aims to obtain a fraction ci/1440 of the total cash that was used to obtain Bitcoins, where the ci are the coefficients in Table 1, telling us how much of the cash is planned to be recovered on a specific day. The coefficients are calculated from empirical data by taking the values of the traded volume and dividing each value by a normalizing constant. For instance, if the traded volume on September 14 was 484601.8 Bitcoins and on September 15 was 705641.0 Bitcoins, each value is divided by the sum 484601.8+705641.0 to obtain the coefficients; thus, 0.4071453+0.5928547=1. This means that on September 14, the FA plans to obtain 40.7%, and on the following day 59.3%, of the capital deficit present in the cash buffer.

Large scale events
Volume anomalies that do not seem to be related to the actions of the FA are regarded as LSEs. While it might be possible to model these spikes in traded volume as actions of certain types of agents, we take the easier path of using the information present in the traded volume data. The dates on which LSEs occurred are extracted from Fig. 2 and listed in Table 2, together with a hypothesis on whether an LSE consisted predominantly of buy or sell orders; this cannot be read from volume data alone but can be assumed depending on the trend in the market price. This means that, in addition to standard trading activity during one day, an increase in trading activity is arranged by issuing more orders to reproduce the green and blue volume anomalies in Fig. 2. The magnitude of an LSE is defined by the number of orders issued on a given day and the amount of Bitcoin bought or sold per order. As we do not have data records related to LSE events, we make the simplifying assumption that the orders during one LSE day arrive with a frequency f to trade amount ρ; that is, every f minutes a new market order is issued to buy or sell ρ Bitcoins.
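The coefficient normalization can be reproduced directly from the September 14–15 worked example above (the helper name is ours):

```python
def selling_coefficients(volumes):
    """Normalize observed daily traded volumes into the coefficients c_i
    of the selling schedule: each volume divided by the sum."""
    total = sum(volumes)
    return [v / total for v in volumes]

c = selling_coefficients([484601.8, 705641.0])
print(round(c[0], 7), round(c[1], 7))  # 0.4071453 0.5928547
print(abs(sum(c) - 1.0) < 1e-12)       # True: coefficients sum to one
```

The same normalization generalizes to EoM events spanning more than two days: each day's coefficient is its share of the total volume traded over the event.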
Additionally, depending on the exact date, the amount ρ is multiplied by a scaling factor such that the volume anomaly during the simulation matches the empirical volume anomaly. The scaling factors are listed in Table 2.

Table 2 List of large scale events associated with volume spikes that are not explained by EoM events

Experiments and results

To demonstrate the essential influence of the FA on the market, four simulation experiments are presented: three non-manipulated scenarios (the base scenario; the susceptible scenario; and the susceptible scenario with large scale events) and the manipulated scenario. Thus, the market price time series can be decomposed in terms of the activity of agents. To ensure that the results are comparable across scenarios, the model parameters are kept the same as listed in Table 3, except that the parameters defining the activity of the excluded agents or events are set to zero in each of the first three scenarios. In the non-manipulated scenarios, the market price time series is the central quantity that provides information on the behavior of the underlying system. In the manipulated scenario, three more quantities related to the activity and influence of the FA are measured along with the price. These quantities are:

The Market Price generated by the model, compared to the Bitcoin market price.
The Volume generated by the model, compared to the reference exchange as defined in the section on volume anomalies. Both empirical and simulated volumes were normalized for comparison on the same scale.
The Inflow of Bitcoin obtained by the FA during the simulation, compared to the inflow of Bitcoin to the 1LSg address. As in the case of volume, both the empirical and simulated inflows were normalized.
The Relative Influence of the FA, defined as the ratio of the Inflow of Bitcoin to the Volume. In this case, normalization is not needed.
Empirical data from January 1, 2017, to March 1, 2018, are used to calibrate the model parameters, and the results are visualized for each scenario (Abel 2015). Some parameters in Table 3 were predefined based on empirical findings (see the “Discussion” section), and the rest of the parameters were calibrated using the stochastic simultaneous optimistic optimization algorithm (Valko et al. 2013), except for the parameter l, which was calibrated manually. More details about the calibration can be found in the “Appendix”.

Non-manipulated scenarios

In the base scenario, we set ρ=s=PCA=0, which means that the FA and CAs are not active, and the scaling factor of the additional amounts bought or sold during the LSEs is multiplied by zero. In the susceptible scenario, the CAs are active and issue orders with a given probability. We refer to this scenario as “susceptible” because, contrary to the base scenario, a market with CAs is prone to large price fluctuations. However, as will be apparent from the simulations, even if LSEs are included, the price is rather unlikely to reach $20000.

Base scenario

This is a scenario in which the market is in an equilibrium state, which is to be expected because, with no speculation present on the market and a sufficient amount of liquidity on both sides of the order book, a large price fluctuation is improbable. By calculating the p value of the augmented Dickey–Fuller test for stationarity for each simulation of the base scenario, we obtain a distribution of p values, as depicted in Fig. 4a. From this histogram, we can see that the alternative hypothesis of stationarity dominates.

Histograms related to non-manipulated scenarios. In subfigure (a) the histogram of p values of the augmented Dickey–Fuller test calculated for each simulation of the base scenario is plotted, with a red dashed line at the value 0.05.
In subfigures (b) and (c) the histograms of the maximum values of the market price achieved during each simulation are plotted for the susceptible scenario and the susceptible scenario with LSEs, respectively.

Susceptible scenario

This scenario includes agents that follow the trend, and therefore one can expect larger price fluctuations. However, although in this case the stationarity test did not provide evidence for stationarity, the price time series is considerably “well-behaved.” Indeed, if we look at the histogram of the maximum values (Fig. 4b), only a minimal number of simulations are capable of surpassing the $10000 Bitcoin price.

Susceptible scenario with large scale events

This scenario includes both the speculative behavior of the CAs and disturbances in the form of LSEs. As shown in Fig. 3, the mean value of the price temporarily shifts before the LSE sell orders lower the price to its long-term value. Overall, this disturbance is insufficient to produce an increasing trend, even when CAs are present.

Simulated market price time series in terms of the activity of agents or the presence of large scale events: the base scenario with only random agents and random speculative agents; the susceptible scenario including chartist agents; and the susceptible scenario with large scale events included in the simulation. The green line is the median price with the 20th, 50th and 95th prediction intervals.

Manipulated scenario

In this scenario, the FA is active during the simulation, and all parameters are set as shown in Table 3. In Fig. 5, we can see the consequences of the presence of the FA compared with the non-manipulated scenarios visualized in Fig. 3. The influence of EoM events is visible in the price time series and, together with the LSEs, forms spikes in the volume. Typically, the FA decides to hold a long position in 20–25% of the cases.
The trajectories of these unfinished manipulation attempts are excluded from the figures because they represent a different market regime that would need a different dataset to be validated.

Simulated market price and market volume with the Fraudulent agent included during the simulation, along with the large scale events and all the agents of the response model. The empirical data (blue) are plotted against the simulated median (green) with the 20th, 50th and 95th prediction intervals.

If everything goes as planned, the FA buys Bitcoins using the cash allocated in the cash matrix, as shown in Fig. 6, where the simulated Bitcoin inflows measured in the model are plotted against the inflow of Bitcoin into the 1LSg address. It can be seen that the Tether outflow encoded in the cash matrix is reproduced via the market simulation with almost the same Bitcoin inflow as that obtained from the real Bitcoin blockchain. By aggregating these simulated daily inflows, the Bitcoin balance bt is obtained and displayed in Fig. 6, where sudden drops owing to EoM events are visible. The balance increases approximately linearly between the drops, and a surplus of Bitcoin is produced over a longer period. Note that the surplus was produced only by executing Scheme 1, and no resources (Tether or dollars) were spent. In other words, other market participants paid the bill.

Time series detailing the behavior of the fraudulent agent with respect to the empirical data (blue), compared to the simulated median (green) with the 20th, 50th and 95th prediction intervals.

Limit order book market robustness

The liquidity of the order book is a strong predictor of the success of a scheme defined by Fig. 1. Increasing liquidity by increasing the number of orders issued by random agents, using the parameters PRA and PRSA, or by increasing the amounts issued per order, using the parameters of the amount distribution, would be the most straightforward way to make the order book more liquid.
In this case, assuming the FA would not adapt, the relative influence of the FA would decrease; thus, the market would be more resistant to manipulation attempts. What is perhaps less obvious is that not only the total amount of liquidity but also the distribution of liquidity is a relevant factor. As noted previously, traders' low agreement about the price of an asset is translated into the dispersion of limit prices further away from the mid-price. Indeed, if traders agreed on the asset's market price, they would put their orders much closer to the mid-price. More orders concentrated closer to the mid-price would result in lower transaction costs; therefore, the efficiency of the FA's manipulation strategy should be lower. This hypothesis can easily be tested in our model environment. By increasing the parameter α, the orders with limit prices previously placed further away from the mid-price will now be placed closer to the mid-price, because increasing the first shape parameter of the beta distribution, while keeping the second shape parameter equal to one, moves the mass of the density function toward the value corresponding to the location parameter a. This means that there are more orders with a limit price close to (1+a)pt for sell orders and close to (1−a)pt for buy orders. As shown in Fig. 7, by increasing the parameter α, the efficiency of the manipulation strategy decreases because the inflated price decreases.

The maximal value of the price time series, averaged over 80 simulations, is plotted against the parameter α of the beta distribution controlling the liquidity deeper in the order book.

The consequence for the FA is that, despite buying more Bitcoin for the same amount of Tether, the price impact is lower because the FA's buy orders no longer match sell orders with such high limit prices. Thus, changing the distribution of liquidity, in our case by controlling the parameter α, has a similar effect as increasing the overall liquidity.
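The effect of α can be illustrated analytically. Assuming a sell-side tail price of the form pt[1 + c − (c − a)X] with X ∼ Beta(α, 1) (our reading of the tail distribution, consistent with sell orders concentrating near (1 + a)pt as α grows), the mean E[X] = α/(α + 1) approaches 1 as α increases, pulling the expected tail sell price toward (1 + a)pt; the values a = 0.015 and c = 0.5 follow the calibration discussed later:

```python
def mean_tail_sell_price(p_t, alpha, a=0.015, c=0.5):
    """Expected tail sell limit price p_t [1 + c - (c - a) E[X]] with
    X ~ Beta(alpha, 1), using E[X] = alpha / (alpha + 1)."""
    return p_t * (1 + c - (c - a) * alpha / (alpha + 1))

means = [mean_tail_sell_price(10000.0, alpha) for alpha in (0.3, 1.0, 5.0, 50.0)]
print([round(m, 1) for m in means])
# Strictly decreasing toward (1 + a) * p_t = 10150 as alpha grows:
print(all(m1 > m2 for m1, m2 in zip(means, means[1:])))  # True
```

Cheaper sell-side liquidity near the mid-price means the FA's buy orders are absorbed at lower prices, which is exactly the reduced price impact described above.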
Note that the parameter α has little effect during EoM events because the FA sells Bitcoin in small amounts, matching buy orders near the mid-price. As the FA has a virtually unlimited amount of Tether to push into the Bitcoin market, it is possible to issue more Tether. However, this would increase the risk associated with the given manipulation scheme; thus, the fraudulent trader would need to increase the backup capital or default in the case of an insufficiently positive market response. Indeed, by increasing the parameter α in our computational experiment, the number of FA defaults was higher. Furthermore, note that even if the FA successfully manages to execute the scheme, the profits would be lower, while the risk would increase.

Discussion

Methodological concerns

In the present work, the design of the model follows an incremental strategy, increasing the complexity until a sufficiently good fit to the empirical data is obtained. This approach is well suited to this case study because the essential importance of the FA was demonstrated by decomposing the market price time series in terms of agents' activities. Given the high level of consistency of our assumptions with other empirical studies found in the economic literature, and the satisfactory fit to the empirical data related to the Bitcoin market, high confidence can be given to the modeling assumptions related to the principles behind the success of the manipulation scheme investigated in this study. Some of the parameter values in Table 3 were set to match the empirical observations of Bitcoin limit order books (Schnaubelt et al. 2019). It was observed that orders are placed as far as 50% from the mid-price, so we set c=0.5. The location of the local maximum in the hump-shaped average order book was observed to be approximately 1% from the mid-price. This fact is also reflected in the model by setting a=0.015.
The parameters of the amount distribution (2) were similarly predefined, considering the findings in Cong et al. (2020). The calibration results agree with known empirical observations. As the probability of an RA issuing an order is higher than that of an RSA, most of the liquidity will be located near the mid-price. However, due to the relatively low value of the α parameter, it is still possible to observe orders further away from the mid-price, which is again in agreement with the findings in Schnaubelt et al. (2019). Although the model implements several realistic assumptions, many simplifications cause higher prediction errors. For instance, for the reasons described in the section discussing volume anomalies, we deem it a plausible assumption that it was sufficient for the fraudulent trader to influence the price on Poloniex and Bittrex, which means that, to model a manipulation of the entire Bitcoin market, it should be sufficient to model the manipulation using only one order book. However, such a simplification is not sufficient to fully capture EoM events. If the FA can liquidate Bitcoins on multiple exchanges in small amounts, then this process is more price-efficient than liquidating on a single exchange. This means that the influence of the real fraudulent trader could have been even slightly higher, and thus the parameter s is probably underestimated. The simulated data did not produce very good results from the end of May until the end of July, roughly between the 2nd and 3rd EoM events. The activity of chartist traders likely depends on both the average returns and the Bitcoin market value, which means that the CAs ought to be less active when the price is low. This is not the case in the model because the parameter PCA is constant. Moreover, to obtain a better fit to the empirical data, it would be necessary to include not only the flows from the dominant Tether addresses but the flows from all Tether addresses controlled by the fraudulent trader.
It is also possible that the fraudulent trader followed a less aggressive selling strategy prior to the third EoM event and started the liquidation process before July 14, 2017. In the fragmented Bitcoin market, it is challenging to correctly identify the reasons behind some of the insufficiencies present in our model, because even actions with a negligible influence on the price in more liquid markets can significantly influence the illiquid Bitcoin market.

Regulatory implications

The economic understanding accompanying the proposed model has important implications for the contemporary cryptocurrency market. A regulation under which stablecoin providers must prove their capital not just once a month but at much shorter intervals is highly desirable to protect the customers of these providers, and the other participants in the market, from being misled into a pump-and-dump scheme. Policymakers are slowly catching up with the industry in terms of legislative regulation. The European Union Commission proposed and agreed on a legal framework for cryptocurrencies, especially targeting stablecoins, in its “Regulation on Markets in Crypto Assets” proposal. In the U.S., President Biden's administration has also recently taken a proactive stand on stablecoin regulation. Individual governments can decide the strength of regulations in agreement with their long-term strategy and consider the consequences of their decisions concerning innovation. These decisions can be effectively implemented at the domestic level; however, exchanges might have an incentive to avoid regulations, as regulations pose the risk of a decrease in traded volume, and some exchanges may engage in illicit behavior. In addition to the legislative regulations implemented in various countries, a different, self-regulatory approach can be adopted.
Regulations that protect the stability of a market by restricting trading mechanisms are already in place on FOREX markets, for instance, constraints on the maximum amount issued by one order, the maximum number of orders of a trader per day, or a maximum limit price. Some of these simple restrictions have already been implemented on more regulated exchanges, such as Huobi or Coinbase. Another, more invasive intervention is circuit breakers, such as price limits or trading halts (Sifat and Mohamad 2019). These regulations would make it more challenging to facilitate manipulative activities but might be perceived as too restrictive, slowing down the sector's growth. Following the discussion on Bitcoin limit order book market robustness, we can target a dynamic approach to prevent market manipulation without affecting daily trade traffic. With a better understanding of how liquidity is linked to market manipulation, an exchange can implement a market surveillance system (Cumming and Johan 2008) to inspect the liquidity distribution in real time and predict the market impact of an issued order (Gu et al. 2008; Weber and Rosenow 2005). The exchange can then refuse to accept an order if there is suspicion that the order aims to create a sudden increase or decrease in the market price. Moreover, exchanges can search for fraudulent behavioral trading patterns in the order books, directly on the blockchain, in aggregated statistics, or even on public forums, and then evaluate the risk of the trading behavior being associated with fraudulent activity and either intervene by refusing to accept orders or report the suspicion to a relevant authority. As identified in this study, the typical (volume) pattern of Scheme 1 is manifested in approximately periodic spikes in the traded volume.
A well-designed monitoring system should be capable of detecting suspicious addresses that repeatedly issue buy orders with a relatively high predicted market impact on a few specific exchanges, followed by high Bitcoin liquidation at roughly periodic intervals on different exchanges, and thus probably engaging in the execution of Scheme 1. It is likely that if such a monitoring system were implemented, manipulation following Scheme 1 would be ineffective. The advantage of the approach described above is that, on the blockchain, all transactions are public and immutable. Any monitoring system can access the full transaction history, which is usually not the case in traditional finance. This property offers, in principle, innovation potential for sophisticated self-learning AI models to oversee market behavior. These models can be trained on historical datasets or in simulated environments capable of reproducing fraudulent patterns, such as those presented in this study. However, one must be aware of the possible limitations that often arise from the adversarial nature of these systems. Present detection tools might therefore not be powerful enough to deal with more sophisticated fraud schemes, and more studies need to be done in this area. While implementing regulatory systems that reduce or inhibit market manipulation would stabilize the market to the clear benefit of the exchanges, this might be challenging to achieve without an overarching authority. Moreover, as exchanges benefit from fraudulent behavior to a certain extent, there might not be enough incentives to combat fraud: the short-term benefits of the current state of affairs may be more appealing than the long-term benefits of a reliable medium of exchange. For instance, in Kim et al. (2021), the effectiveness of money laundering reporting through exchanges is questioned.
That study assumes that exchanges benefit from money laundering, in which case reporting suspicious transactions can actually increase money laundering activity. One must be aware that a similar situation can occur when dealing with market manipulation. It can be argued that one of the main reasons for the widespread popularization of Bitcoin was the price increase orchestrated in 2017. Even though the exchanges likely knew about the issue,12 as is apparent both from the statistical evidence presented in Griffin and Shams (2019) and from the EoM event reconstruction by our model, the manipulation continued. Conclusion and further research directions It was demonstrated that introducing a fraudulent agent with a price manipulation strategy could create a price bubble that would otherwise not occur, or would occur only with practically zero probability. The model can also explain several quantitative phenomena. Most anomalies, such as dips in the market price or spikes in the market volume during 2017 and the beginning of 2018, were connected to the end-of-month statements of Tether Limited. We hypothesize that the remaining anomalies can be explained by the inflow of new investors responding to the positive trend in market price caused by the price manipulation. Additionally, the efficiency of a price manipulation scheme was connected to several studies on order book liquidity and price formation. The dependency on the shape of the liquidity distribution is discussed and demonstrated computationally. The results of our model provide important insights that further the understanding of exchange manipulation, with possible impacts on the entire market. These findings can be fruitful for policymakers and regulators when designing suitable countermeasures against market abuse.
In addition, the proposed countermeasures can be tested in a simulated environment, such as the one presented in this study or one similar to ours, going in the promising direction of deep integration of distributed ledger technologies and artificial intelligence. These research directions may be closely related to the study of contingent economic arrangements or experimental financial instruments. For a decentralized monetary system to work, it seems essential to implement a set of regulations that prevent manipulation attempts, or at least make it more challenging to apply them successfully. This model can be extended in several ways. The two most obvious extensions are to use full information from the addresses related to the market manipulator, as in Griffin and Shams (2019), or to use detailed order book data, as in Schnaubelt et al. (2019), but directly for the exchanges involved. Combining the datasets of these two studies with our model can potentially remove some of the remaining misalignments and provide a better fit for market price, relative volume, and realized inflow. Furthermore, a more sophisticated approach can be adopted when designing the fraudulent agent and the response agents, a choice that would include more complex behavioral rules and allow the agents to be active on several exchanges. In particular, the fraudulent trader should be enabled to observe and act upon the liquidity situation in the order book, the response of the market, and the possible market abuse countermeasures that may be included in the simulated environment. Finally, if a sufficiently rich market model is attained, the knowledge and understanding obtained by analyzing its function can be used to update the trading infrastructure of Bitcoin. The methodology developed in this research area has the potential to be further generalized and applied to other novel economic and financial infrastructures.
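As noted in the regulatory discussion, the typical volume pattern of Scheme 1 shows up as approximately periodic spikes in traded volume (roughly monthly, around the EoM events). A minimal detector for that signature could look as follows; the median-multiple spike rule and the spacing tolerance are made-up illustrative choices, not the paper's method:

```python
import statistics

def spike_days(volume, k=3.0):
    """Flag days whose volume exceeds k times the series median (assumed rule)."""
    med = statistics.median(volume)
    return [t for t, v in enumerate(volume) if v > k * med]

def roughly_periodic(days, tol=3):
    """True if consecutive flagged days are spaced within `tol` of the mean gap."""
    if len(days) < 3:
        return False
    gaps = [b - a for a, b in zip(days, days[1:])]
    mean_gap = sum(gaps) / len(gaps)
    return all(abs(g - mean_gap) <= tol for g in gaps)

# synthetic volume series: baseline 1.0 with spikes every ~30 days (end of month)
vol = [1.0] * 120
for t in (29, 60, 89, 119):
    vol[t] = 8.0

days = spike_days(vol)        # [29, 60, 89, 119]
roughly_periodic(days)        # the spikes recur at ~30-day intervals
```

A real surveillance system would combine such a volume filter with the address-level evidence described earlier, rather than rely on periodicity alone.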
Abbreviations FA Fraudulent agent CA Chartist agent RA Random agent RSA Random speculative agent LSE Large scale event EoM End of month AI Artificial intelligence Appendix: simulation and calibration details In principle, we are interested in finding values of the model parameters that provide a good fit to the price time series and do not overestimate the influence of exogenous elements such as the activity of the FA or the magnitude of LSEs. This means that the accuracy of the model needs to be defined either as a multi-objective function, or as a single-objective function that sums weighted components of the multi-objective function, where: the first component measures the error between the generated and empirical market price; the second component measures the error between the generated and empirical market volume; the third component measures the error between the generated and empirical relative volume. In the optimization routine we choose the simpler weighted option. Furthermore, if the empirical and generated market volume is standardized, then the volume peaks already provide information about the influence of the FA (through EoM events) and the influence of the LSEs, both relative to the spikes in empirical volume. This means that by measuring the distance between the generated and the empirical volume during the EoM or LSE days, we already impose a penalty if the algorithm were to overestimate the influence of the EoM- and LSE-related parameters, namely s and ρ. Therefore the objective function measuring the accuracy of the model for the parameter vector θ = (σ, α, P_CA, Q_CA, P_RA, P_RSA, s, ρ), with predefined values for μ, β, a, c, q, λ_P, λ_E, l, can be simplified to: Err(θ) = (1/N) ∑_{τ=1}^{N} |p_τ − p̃_τ| + w · max_{τ∈D} |v_τ − ũ_τ|, (3) where D is the set of EoM and LSE days. This provides a compromise between complexity and accuracy. The weight is w = 400.
The symbols p̃_τ and ũ_τ denote the median time series taken over a collection of 16 trajectories of generated price and volume, respectively, in order to counter the stochasticity of the model output. Most of the parameters of the model are relatively sensitive, and since the response model agents do not have bounds on available capital, certain parameter configurations can cause the market price to grow exponentially or to decline essentially to zero. This extreme behavior mainly depends on the value of the parameter l. The parameter l and the bounds on the parameter vector θ were decided during the initial exploration of the simulation output. The bounds on the parameter vector θ are listed in Table 4. Table 4 Parameter bounds used during the stochastic simultaneous optimistic optimization algorithm Author contributions PF identified the research question, designed and implemented the model, did the literature review, acquisition of data, analysis of empirical data, model calibration, and analysis of simulation output. GS and SK contributed with supervision, review, and editing. TvE contributed with supervision and review. All authors read and approved the final manuscript. Declarations This article does not contain any studies with human participants or animals performed by any of the authors. Competing interests The authors declare that they have no competing interests. Footnotes 1. For a short introduction to the most known cryptocurrency, Bitcoin, we refer to Böhme et al. (2015), and for an overview of others we refer to Berentsen and Schär (2018). 2. For a review and more examples, we refer to Badawi and Jourdan (2020). 3. Defined as the premium a trader has to pay to liquidate a given amount of assets. 4. For example, in Raberto et al. (2005) a Gaussian assumption is employed, which is also used in the cryptocurrency setting (Cocco and Marchesi 2016; Cocco et al. 2017, 2019).
Several studies relaxed the Gaussian assumption with either a log-normal assumption (Bartolozzi 2010), or a power-law assumption (Cui and Brabazon 2012; McGroarty et al. 2019).
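Returning to the calibration appendix: the objective Err(θ) of Eq. (3), i.e. the mean absolute price error plus a weighted worst-case volume error over the EoM/LSE days, with the generated series taken as medians over the simulated trajectories, could be coded as below. The toy data are invented for illustration:

```python
import statistics

def model_error(price_emp, vol_emp, price_runs, vol_runs, eom_days, w=400.0):
    """Objective of Eq. (3): mean absolute price error plus w times the worst
    volume error over the EoM/LSE days D (here `eom_days`). Generated series
    are medians over the simulated trajectories to damp stochasticity."""
    n = len(price_emp)
    p_gen = [statistics.median(run[t] for run in price_runs) for t in range(n)]
    u_gen = [statistics.median(run[t] for run in vol_runs) for t in range(n)]
    price_err = sum(abs(price_emp[t] - p_gen[t]) for t in range(n)) / n
    vol_err = max(abs(vol_emp[t] - u_gen[t]) for t in eom_days)
    return price_err + w * vol_err

# invented toy data: two simulated trajectories, one EoM day at t = 1
price_emp = [100.0, 101.0, 103.0]
vol_emp = [1.0, 9.0, 1.0]
price_runs = [[100.0, 101.0, 103.0], [100.0, 101.0, 103.0]]
vol_runs = [[1.0, 8.0, 1.0], [1.0, 8.0, 1.0]]
model_error(price_emp, vol_emp, price_runs, vol_runs, eom_days=[1])  # 400.0
```

In the toy run the price fits exactly while the volume misses the spike day by 1, so the w = 400 weighting dominates, which is exactly the penalty on mis-sized EoM/LSE spikes the appendix describes.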
Therefore, if seemingly legal fraudulent trades of large volumes are executed on one exchange, then the reported price will be skewed by the activity of this exchange, diminishing the influence of the other exchanges. It is clear that if fraudulent buy orders are matched with sell orders with high limit prices, the calculated Bitcoin market price will consequently be pushed higher than the average price traded on other exchanges. A second way the activity on one exchange can influence the whole market is by traders observing price fluctuations on multiple exchanges and generating a profit by taking advantage of these small price differences. It was concluded in Chordia et al. (2008) that such arbitrage activity, if stimulated by sufficient liquidity, results in higher price efficiency, which, in turn, results in a more stable market price unless new external information enters the market. However, in Marshall et al. (2018), analyzing a database of Bitcoin intraday data on 14 exchanges, including prices in 13 currencies, it was observed that cryptocurrency markets tend to be illiquid and hence less price-efficient. This means that there is lower overall agreement on the price of Bitcoin. From this, it can be concluded that the variations in price across all major exchanges, given the low liquidity of Bitcoin, can increase price volatility. Indeed, in the same study, evidence shows that an increase in illiquidity corresponds with an increase in crash risk across all pairs when the liquidity proxy is either the effective spread or the price impact. This volatility–liquidity relationship was confirmed by several studies (Næs and Skjeltorp 2006; Tripathi et al. 2020; Valenzuela et al. 2015) from a quantitative point of view. Based on this argument, one might expect some exchanges to hold ascendancy over others in forming the Bitcoin price. The earliest study to investigate this question is Brandvold et al. (2015).
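The index-skew mechanism described above, where a volume-weighted market price is pulled by large prints on a single exchange, can be illustrated with a toy calculation. All quotes below are invented:

```python
def index_price(quotes):
    """Volume-weighted average price across exchanges;
    `quotes` is a list of (price, traded_volume) pairs."""
    total = sum(v for _, v in quotes)
    return sum(p * v for p, v in quotes) / total

honest = [(100.0, 10.0), (101.0, 12.0), (99.5, 8.0)]
index_price(honest)            # ~100.3, close to the honest consensus

# one exchange prints large (possibly wash-traded) volume at 110
manipulated = honest + [(110.0, 60.0)]
index_price(manipulated)       # pulled well above the honest exchanges' prices
```

Because the weight is traded volume, inflating volume on a single venue is enough to drag the calculated market price away from the consensus of the remaining exchanges.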
yes
Cryptocurrency
Can Bitcoin and other cryptocurrencies be manipulated easily?
yes_statement
"bitcoin" and other "cryptocurrencies" can be "easily" "manipulated".. manipulating "bitcoin" and other "cryptocurrencies" is easy.
https://www.forbes.com/sites/javierpaz/2022/08/26/more-than-half-of-all-bitcoin-trades-are-fake/
More Than Half Of All Bitcoin Trades Are Fake
More Than Half Of All Bitcoin Trades Are Fake A new Forbes analysis of 157 crypto exchanges finds that 51% of the daily bitcoin trading volume being reported is likely bogus. Within the emerging and turbulent market for cryptocurrencies, where there are no fewer than 10,000 tokens, bitcoin is the great granddaddy, the blue-chip, representing 40% of the $1 trillion in crypto assets outstanding. Bitcoin is crypto’s gateway drug. An estimated 46 million adult Americans already own it, according to New York Digital Investment Group, and an increasing number of institutional investors and corporations are warming to the nascent alternative asset. But can you trust what your crypto exchange or e-brokerage reports about trading in the most important digital currency? One of the most common criticisms of bitcoin is pervasive wash trading (a form of fake volume) and poor surveillance across exchanges. The U.S. Commodity Futures Trading Commission defines wash trading as “entering into, or purporting to enter into, transactions to give the appearance that purchases and sales have been made, without incurring market risk or changing the trader's market position.” The reason some traders engage in wash trading is to inflate the trading volume of an asset to give the appearance of rising popularity. In some cases trading bots execute these wash trades in tokens, increasing volume, while at the same time insiders reinforce the activity with bullish remarks, driving up the price in what is effectively a pump-and-dump scheme. Wash trading also benefits exchanges because it allows them to appear to have more volume than they actually do, potentially encouraging more legitimate trading. There is no universally accepted method of calculating bitcoin daily volume, even among the industry’s most reputable research firms.
For instance, as of this writing, CoinMarketCap puts the latest 24-hour trading of bitcoin at $32 billion, CoinGecko at $27 billion, Nomics at $57 billion and Messari at $5 billion. Adding to the challenges are persistent fears about the solvency of crypto exchanges, underscored by the public collapses of Voyager and Celsius. In an exclusive interview with Forbes in late June, FTX CEO Sam Bankman-Fried commented that there are many exchange bankruptcies yet to come. A significant repercussion of this lack of faith in its underlying markets is the Securities and Exchange Commission’s refusal to approve a spot bitcoin ETF. Unfortunately for the bitcoin ETF hopefuls, many of these fears and criticisms are valid. As part of Forbes research into the crypto ecosystem using 2021 data, we ranked the 60 best exchanges in March. More recently we conducted a deeper dive into the bitcoin trading markets to answer a few pressing questions: More than half of all reported trading volume is likely to be fake or non-economic. Forbes estimates the global daily bitcoin volume for the industry was $128 billion on June 14. That is 51% less than the $262 billion one would get by taking the sum of self-reported volume from multiple sources. Tether (USDT), the world’s largest stablecoin, continues to be a dominant player in the crypto trading economy, especially when it comes to trades against bitcoin. Its current market capitalization is $68 billion, despite questions about its reserves. In terms of how much bitcoin activity takes place at these firms, 21 crypto exchanges generate $1 billion or more in daily trading activity, while the next 33 exchanges had volume between $200 million and $999 million across all contract types: spot, futures and perpetuals. Perpetual futures, or perpetual swaps as they are also known, are futures contracts that don’t require investors to roll over their positions. Binance is the clear leader, with a 27% market share, followed by FTX.
Looking only at spot bitcoin, the top position is shared by Binance, FTX, and OKX. Chicago-based CME Group is the market leader in bitcoin futures trading. The biggest problem areas regarding fake volume are firms that tout big volume but operate with little or no regulatory oversight that would make their figures more credible, notably Binance, MEXC Global and Bybit. Altogether, the lesser regulated exchanges in our study account for approximately $89 billion of the true volume (they claim $217 billion). The creation of new trading assets and products such as stablecoins and perpetual futures adds complications for national authorities seeking to regulate crypto markets. Major U.S. exchanges hardly utilize these instruments or contracts in any of their trading. However, offshore exchanges make significant use of them as ways to synthetically create U.S. dollar liquidity on their platforms (they cannot get U.S. bank accounts). In the Western world and particularly in the U.S., it is tempting to think of bitcoin only trading against either the U.S. dollar or the euro and British pound. But some of the largest trading pair activity occurs against fiat currencies like the Japanese yen and Korean won and against major stablecoins like Binance U.S. dollar and the USD coin. 573 million people visit crypto exchange websites on a monthly basis. We hope that this report builds on top of the important work done by other digital asset researchers such as Bitwise, which estimated in a March 2019 white paper that 95% of CoinMarketCap’s bitcoin trading volume was fake and/or non-economic. Our Approach Forbes uses quantitative and qualitative analyses to adjust trading volume reported by the exchanges. Unlike other methods that carry out tests on transactional data (and can also be duped), Forbes grades a firm’s credibility by evaluating no fewer than five datasets that together inspire or diminish confidence in a firm’s self-reported data. 
Data comes from four crypto media firms, CoinMarketCap, CoinGecko, Nomics and Messari, as well as multiple exchanges and two other third-party data providers. We apply volume discounts based on a proprietary methodology that relies on 10 factors, such as an exchange’s home regulator (if any) and volume metrics based on an exchange’s web traffic and estimated workforce size. We also use the number and quality of crypto licenses as a proxy to gauge the sophistication of each crypto exchange in matters pertaining to regulation and trade surveillance. If a firm shows a commitment to transparency by conducting token proofs of reserve or by participating in Forbes crypto exchange surveys, it qualifies for a “transparency credit” that lowers any discount that may otherwise apply. Many of these factors were also present in Forbes’ crypto exchange ranking formula. We divided them into three categories: Group 1: 49 crypto exchanges that were assigned discounts of 0-25% generated $39 billion of real bitcoin trading activity across all markets–spot, derivatives and futures–on June 14. Group 3: The remaining 35 firms were penalized with a high discount rate (80-99%) and traded $7.7 billion out of $59 billion claimed. THE FORBES DISCOUNT RATE - JUN 2022 CRYPTO EXCHANGE GROUPS BY DISCOUNT RATE Exchanges sorted by group and Forbes calculated volume, Jun 2022* SUMMARY CHARTS & TABLES Despite crypto’s global nature, spot bitcoin trading activity is centered around relatively few currency pairs and stablecoins. Stablecoin USDT is the biggest, followed by the U.S. dollar. The next biggest fiat assets are the yen and won. THE FORBES REAL BITCOIN TRADING VOLUME BTC-US DOLLAR Daily Volume Group 1 exchanges, many of which are based in the U.S., provide $24.3 billion in daily USD-BTC liquidity, and Group 2 exchanges add $17.3 billion. The prominence of Group 1 exchanges as the main source of BTC-USD occurs across spot, perpetuals, and futures contracts.
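The grouping and discount scheme described above can be illustrated with invented figures. Only the Group 1 (0-25%) and Group 3 (80-99%) discount ranges come from the article; the exchange names, reported volumes, and exact rates below are hypothetical:

```python
# Illustrative reproduction of the discount step: self-reported volumes are
# scaled down by each exchange's assigned discount rate.
reported = {"exchange_a": 10_000.0,   # Group 1, lightly discounted
            "exchange_b": 50_000.0,   # Group 2
            "exchange_c": 14_000.0}   # Group 3, heavily discounted
discount = {"exchange_a": 0.10, "exchange_b": 0.45, "exchange_c": 0.95}

real = {name: vol * (1.0 - discount[name]) for name, vol in reported.items()}
total_reported = sum(reported.values())   # 74,000 claimed
total_real = sum(real.values())           # 37,200 survives the discounts
overall_discount = 1.0 - total_real / total_reported  # roughly half, as in the article
```

With these made-up inputs about half of the claimed volume survives, which mirrors the article's headline finding that 51% of reported volume is likely bogus.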
CME Group is the leading provider of bitcoin futures globally, with $2.1 billion of USD-BTC futures changing hands daily. There are at least 27 crypto exchanges–12 in Group 1–that have daily BTC-USD liquidity greater than $5 million. BITCOIN - U.S. DOLLAR (USD) TRADING ACTIVITY Daily real volume in $ million by crypto exchange group, Jun 14, 2022 BTC - U.S. TETHER Daily Volume At $71.4 billion daily volume, bitcoin-tether (BTC-USDT) activity exceeds that of BTC-USD by 57%, with 79% generated by Group 2 crypto exchanges and 5% by those in Group 3. There are 77 exchanges–44 in Group 2, 12 in Group 1–with daily bitcoin-tether volume above $5 million. Tether is prominent across spot and perpetual futures markets, less so among the regulated futures industry, which is largely absent outside of the U.S. BITCOIN - TETHER (USDT) TRADING ACTIVITY Daily real volume in $ million by crypto exchange group, Jun 14, 2022 BTC - U.S. DOLLAR COIN Daily Volume U.S. dollar coin (USDC) is gaining adoption in the stablecoin arena. Daily liquidity for bitcoin-USDC was $2.15 billion, with Groups 1 and 2 splitting that total 39% and 60%, respectively. An interesting observation is that Group 2 exchanges use USDC actively in the spot bitcoin market whereas Group 1 exchanges do so with perpetuals. This different use could suggest that Group 2 exchanges may be open to the idea of supporting an alternative to tether’s dominance in the stablecoin market. USDT and Binance USD (BUSD) each generate more volume than USDC, but the latter now has 26 crypto exchanges (17 in Group 2) with daily trading volume of $5 million or more, versus 77 exchanges for USDT and five with BUSD. If tether’s prominence begins to wane, USDC could be the stablecoin most likely to pick up its crown. BITCOIN - U.S.
DOLLAR COIN (USDC) TRADING ACTIVITY Daily real volume in $ million by crypto exchange group, Jun 14, 2022 CRYPTO EXCHANGES BY REAL TRADING VOLUME TOP-10 CRYPTO EXCHANGES BY OVERALL REAL BITCOIN VOLUME Daily real bitcoin volume by leading firm in $ millions, Jun 14, 2022 Bitcoin Trading Volume by Exchange Group The top-10 Group 1 crypto exchanges by volume originate from across the world, with three from the U.S. (CME Group, Coinbase, Kraken), one from Singapore (Crypto.com), one from Europe (LMAX Digital), four from financial offshore centers (FTX, OKX, Gate.io, BitMEX), and one from Central America (Deribit). Among Group 1 firms, FTX is the largest and growing at a fast clip. It wasn’t until mid-2021 that institutional funding fueled a transformation of FTX operations from a midsized unregulated exchange focused on offshore crypto derivatives to a global group of exchanges today regulated in the U.S., Japan, Europe and elsewhere. In addition to derivatives, FTX trades in crypto spot, tokenized stocks and has recently added equities. LEADING GROUP 1 CRYPTO EXCHANGES Group 2 crypto exchanges tend to be large and possess wide product offerings. They primarily focus on growth and tend to have much less interest in being regulated where they operate. They also generally lack robust ways to track and deter wash trading. Binance is by far the largest crypto exchange in Group 2, with $34.2 billion of daily trading activity, followed by Bybit with $8.9 billion. The majority of these exchanges are based in offshore havens such as the Seychelles and British Virgin Islands. LEADING GROUP 2 CRYPTO EXCHANGES Group 3 consists of 36 crypto exchanges which, with few exceptions, are unregulated and small. Their huge self-reported volumes and tiny visitor numbers cast doubt on the possibility that a limited audience could indeed generate that much trading activity.
A case in point is BitCoke, which CoinMarketCap identifies as a Hong Kong-based, Cayman Island-domiciled exchange that purportedly generated $14 billion daily–mostly from BTC-USDT perpetuals. SimilarWeb, however, indicates that the exchange’s domain receives fewer than 10,000 monthly visitors–with 53% coming from Argentina alone. The discrepancies in volume versus traffic plus the lack of regulatory credentials result in Forbes discounting this firm’s volume by 95% to $702 million. CRYPTO EXCHANGE MONTHLY VISITS APR 2022 Visits in millions by group - Four exchanges with more than 20 million visitors excluded (Binance, Coinbase, Bybit, FTX) LARGEST EXCHANGES BY MAJOR BITCOIN PAIR As discussed above, BTC/USD and BTC/USDT are by far the biggest spot pairs for bitcoin, but there are a few other pairs worth mentioning. The next largest are BTC-KRW, BTC-JPY, BTC-USDC, and BTC-EUR. An exchange’s decision to offer base assets across bitcoin, especially when it comes to fiat, usually comes down to the local fiat currency used by an exchange’s client base. Each of the companies trading bitcoin against the won or yen is based in South Korea or Japan respectively. USDC, by nature of its blockchain-based DNA, can cross national boundaries more easily. Readers may notice that Kraken, Binance or Coinbase are not based in Europe, though they each have a series of licenses to operate in certain countries. They each offer euro trading as a way to onboard new users, but unlike the South Korea or Japan-based exchanges, the euro is not their most dominant base asset for trading. TOP CRYPTO EXCHANGES - SELECT SPOT BITCOIN PAIRS Spot Bitcoin Forbes True Volume in $millions, Jun 14, 2022 However, while eight pairs by volume garner the majority of bitcoin volume, there are dozens of other varieties trading at obscure exchanges uncounted even in our present study.
For example, it is difficult to find the amount of BTC-NGN (Nigerian naira) volume traded in Nigeria because crypto data firms like Nomics, CoinMarketCap and CoinGecko generally do not track it. One can safely assume that local crypto exchanges not widely known outside of Nigeria capture most BTC-NGN liquidity, which is likely true for many other exchanges operating in emerging markets. LARGEST SPOT BITCOIN CRYPTO EXCHANGES Bitcoin Forbes Real Volume in $ millions, Jun 14, 2022 These observations are largely true when it comes to perpetual futures as well. However, the won and the yen do not appear to have gained significant market share in this area. LARGEST BITCOIN PERPETUALS CRYPTO EXCHANGES Bitcoin Forbes Real Volume in $ millions, Jun 14, 2022 Finally, when it comes to the traditional futures markets, such as those that offer regular monthly expirations, the only two pairs that seem to matter are BTC-USD and BTC-USDT. LARGEST BITCOIN FUTURES CRYPTO EXCHANGES Bitcoin Forbes Real Volume in $ millions, Jun 14, 2022 KEY TAKEAWAYS The Forbes Real Volume study revealed a number of key insights for crypto investors and the industry. Bitcoin may just be the beginning of the problem. If reported trading volumes for bitcoin, the most regulated and closely-watched crypto asset around the world, are untrustworthy, then metrics for even smaller assets should be taken with even greater grains of salt. At its best, trading volume is one of the most measurable signs of investor interest, but it can be easily manipulated to convince novice investors that an asset has much more demand than it actually does. Binance remains the 800-lb elephant in the room. Even after a 45% discount on its volume, Binance still generates the equivalent of 27.3% of all “real” trading volume. There is no other crypto exchange that can match its market power, and it’s been that way for the past two years.
That said, while Binance has been saying all of the right things about cooperating with regulators - it has started getting licenses around the world and is promising to announce a global headquarters - questions remain about its operational controls. Unless regulators can get comfortable with Binance’s legitimacy, it may be difficult to envision a spot ETF getting approved anytime soon. Tether remains “Too Big To Fail” - for now: This study invites more questions about the true use and value of two of the largest stablecoins - USDT and BUSD. Say what you will about Tether, and people have, it has found product-market fit in a big way. But that is the exact problem in the minds of many so-called Tether Truthers, who do not believe that the $68 billion is actually backed by reserves. It is hard to imagine what would happen to markets if traders stopped trusting tether - and to be fair there is little evidence that this is happening - and none of its competitors were willing to take its place. Areas For Future Study The role of stablecoins in market manipulation. We did not see any evidence that tether-based trading pairs were any more prone to fraud than other assets. However, this area is worth looking into further, especially if tether begins to deviate again from its $1 peg or other algorithmic stablecoins begin to gain traction in large spot-market trading. An ostensibly stable base asset that has higher-than-expected volatility can always lead to both legitimate arbitrage opportunities as well as openings for fraud. The potential of perpetual futures to be manipulated. Through our research, including first-person interviews with direct market participants, we did not see any evidence that perpetual futures are more prone to wash trading and other forms of manipulation than conventional futures or spot contracts. 
However, given the relatively novel nature of this product (it was created in 2016), as well as its dominance in crypto trading, it is well worth deeper study. The future of DEXs in market manipulation. This report did not focus on decentralized exchanges (DEXs), in large part due to the fact that they are not major players in bitcoin trading. To the contrary, when it comes to spot markets most of the major players have separated themselves from the major centralized exchanges by specializing in novel ways to provide liquidity in long-tail assets that are not financially worthwhile for many traditional exchanges to offer. That said, the market share of DEXs has slowly been creeping up on that of centralized spot exchanges–there are even days when Uniswap, the largest DEX, has more trading volume than Coinbase. FORBES METHODOLOGY The Forbes methodology for discounting bitcoin trading volume follows a series of steps. Regulation. We identify the crypto licenses each exchange possesses, and from which regulatory body, and use that as a proxy to gauge its level of sophistication and intent to deter wash trades and the publishing of fake volume. Third-party input. We considered the work of select third parties such as volume data from CoinMarketCap, CoinGecko, Nomics and Messari. Messari’s volume statistics are less extensive by pairs, and it has fewer exchanges than its peers, but it has its own real-volume calculations. Forbes tracked in recent months how Messari applied a volume discount ranging from 40% to 65% to Binance volume, compared with the averages reported by CoinMarketCap, CoinGecko and Nomics at the time. Messari also discounts the trading volume of FTX by a lesser percentage (less than 20%) and that of Kraken by 99%. With regard to the latter, Forbes doesn’t share the view that a heavy discount should apply to a firm that is among the most regulated crypto exchanges in the world.
Most exchanges going through the Messari real volume analysis, however, lack any type of volume discount.** Web traffic. Forbes employs third-party data from web analytics firm SimilarWeb to heavily discount the volume of firms claiming a high trading volume without having sufficient crypto licenses and web traffic to generate such volume. Forbes interviews. Forbes has conducted dozens of interviews of senior executives at major crypto exchanges to supplement quantitative information on a firm’s profile. * Editor’s note. After publication, Bullish.com provided Forbes non-public information, such as the fact that Bullish is in the process of moving to an institutional-only focus from the present one, which appears to be retail and institutional. For the original ranking Bullish received a discount of 90% on bitcoin volume; however, considering its institutional focus and other factors, a discount of 15% is more appropriate. ** Editor’s note II. After publication, Messari notified Forbes that parts of its website experienced a glitch (now resolved) showing only a subset of Kraken’s trading volume; the firm also reaffirmed that it does not discount the volume of Kraken, FTX, or Binance.
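The BitCoke example discussed earlier, where $14 billion of claimed daily volume met fewer than 10,000 monthly visitors, suggests a simple plausibility check comparing claimed volume against web traffic. The cutoff below is an illustrative assumption, not Forbes' actual methodology:

```python
def implausible_volume(daily_volume, monthly_visits, max_per_visit=50_000.0):
    """Flag an exchange whose claimed daily volume implies an implausible
    dollar amount per monthly visitor (the threshold is a made-up value)."""
    return daily_volume / max(monthly_visits, 1) > max_per_visit

implausible_volume(14e9, 10_000)    # $1.4M of claimed volume per visitor: flagged
implausible_volume(2e9, 5_000_000)  # $400 per visitor: plausible
```

A real discount model would combine such a traffic ratio with licensing, workforce, and transparency signals, as the methodology section describes, rather than rely on a single cutoff.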
More Than Half Of All Bitcoin Trades Are Fake A new Forbes analysis of 157 crypto exchanges finds that 51% of the daily bitcoin trading volume being reported is likely bogus. Within the emerging and turbulent market for cryptocurrencies, where there are no fewer than 10,000 tokens, bitcoin, is the great granddaddy, the blue-chip, representing 40% of the $1 trillion in crypto assets outstanding. BitcoinBTC is crypto’s gateway drug. An estimated 46 million adult Americans already own it according to New York Digital Investment Group, and an increasing number of institutional investors and corporations are warming to the nascent alternative asset. But can you trust what your crypto exchange or e-brokerage reports about trading in the most important digital currency? One of the most common criticisms of bitcoin is pervasive wash trading (a form of fake volume) and poor surveillance across exchanges. The U.S. Commodity Futures Trading Commission defines wash trading as “entering into, or purporting to enter into, transactions to give the appearance that purchases and sales have been made, without incurring market risk or changing the trader's market position.” The reason why some traders engage in wash trading is to inflate the trading volume of an asset to give the appearance of rising popularity. In some cases trading bots execute these wash trades in tokens, increasing volume, while at the same time insiders reinforce the activity with bullish remarks, driving up the price in what is effectively a pump and dump scheme. Wash trading also benefits exchanges because it allows them to appear to have more volume than they actually do, potentially encouraging more legitimate trading. There is no universally accepted method of calculating bitcoin daily volume, even among the industry’s most reputable research firms. 
For instance, as of this writing, CoinMarketCap puts the latest 24-hour trading of bitcoin at $32 billion, CoinGecko at $27 billion, Nomics at $57 billion and Messari at $5 billion.
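The spread among those aggregator figures is itself a useful signal of how unsettled volume measurement is. A minimal sketch (using the figures quoted above) that quantifies the disagreement as a highest-to-lowest ratio:

```python
def volume_dispersion(reported):
    """Ratio of the highest to the lowest reported volume across aggregators."""
    values = list(reported)
    return max(values) / min(values)

# 24-hour bitcoin volume figures quoted above, in billions of USD.
reports = {"CoinMarketCap": 32, "CoinGecko": 27, "Nomics": 57, "Messari": 5}
ratio = volume_dispersion(reports.values())
print(round(ratio, 1))  # → 11.4
```

An elevenfold gap between reputable data providers illustrates why Forbes applies its own discounts rather than taking any single feed at face value.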
yes
Cryptocurrency
Can Bitcoin and other cryptocurrencies be manipulated easily?
yes_statement
"bitcoin" and other "cryptocurrencies" can be "easily" "manipulated".. manipulating "bitcoin" and other "cryptocurrencies" is easy.
https://www.investopedia.com/news/could-cryptocurrencies-replace-cash-bitcoin-flippening/
Could Cryptocurrencies Replace Cash?
At the beginning of the cryptocurrency boom, Bitcoin seemed to be the unquestioned leader. Up until early this year, Bitcoin accounted for the vast majority of the industry’s market capitalization; then, in a span of just weeks, Ethereum, Ripple, and other currencies rushed to catch up. While Bitcoin is still in the lead, the rapid turnover in the industry has some analysts debating if cryptocurrencies are actually currencies. Some are predicting that even bigger changes could be ahead. Among them? The idea that cryptocurrencies could come to replace cash entirely. Possible Advantages to a Crypto Future A report by Futurism highlights some of the possible outcomes, should cryptocurrencies surpass fiat currencies at some point in the future. One important consideration is that cryptocurrencies cannot be manipulated quite as easily as fiat currency, largely due to their decentralized and unregulated status. Beyond that, cryptocurrencies could better support the concept of a universal basic income than fiat currencies would. As a matter of fact, some programs have already experimented with the use of cryptocurrencies as means of distributing a universal basic income. Further, cryptocurrencies could help to get rid of intermediaries in everyday transactions. This could cut costs for businesses and help out consumers. Possible Concerns if Cryptocurrencies Replace Cash Of course, there are also some huge challenges and concerns with this scenario. If cryptocurrencies outpace cash in terms of usage, traditional currencies will lose value without any means of recourse. Should cryptocurrencies take over entirely, new infrastructure would have to be developed in order to allow the world to adapt. There would inevitably be difficulties with the transition, as cash could become incompatible quite quickly, leaving some people with lost assets. Established financial institutions would likely have to scramble to change their ways. 
It is important to note that while the initial Bitcoin-mania saw quite a few businesses offer to accept the cryptocurrency, that list has steadily dwindled, bringing back skepticism about its use as a medium of exchange. Beyond the impact of a cryptocurrency future on individual consumers and on financial institutions, governments themselves would suffer. Governmental control over central currencies is key to regulation in many ways, and cryptocurrencies would operate with much less government purview. Governments could no longer, for example, determine how much of a currency to print in response to external and internal pressures. Rather, the generation of new coins or tokens would be dependent upon independent mining operations. Regardless of how individual investors may feel about the prospect of a switch from standard cash to cryptocurrencies, it is likely out of anyone’s hands. Of course, with ample speculation abounding that the cryptocurrency industry is a bubble that is destined to pop, it’s also possible that predictions of a crypto future could be overblown. What is difficult for investors is that, as with all things crypto-related, changes happen incredibly quickly, and predicting them is always tough.
At the beginning of the cryptocurrency boom, Bitcoin seemed to be the unquestioned leader. Up until early this year, Bitcoin accounted for the vast majority of the industry’s market capitalization; then, in a span of just weeks, Ethereum, Ripple, and other currencies rushed to catch up. While Bitcoin is still in the lead, the rapid turnover in the industry has some analysts debating if cryptocurrencies are actually currencies. Some are predicting that even bigger changes could be ahead. Among them? The idea that cryptocurrencies could come to replace cash entirely. Possible Advantages to a Crypto Future A report by Futurism highlights some of the possible outcomes, should cryptocurrencies surpass fiat currencies at some point in the future. One important consideration is that cryptocurrencies cannot be manipulated quite as easily as fiat currency, largely due to their decentralized and unregulated status. Beyond that, cryptocurrencies could better support the concept of a universal basic income than fiat currencies would. As a matter of fact, some programs have already experimented with the use of cryptocurrencies as means of distributing a universal basic income. Further, cryptocurrencies could help to get rid of intermediaries in everyday transactions. This could cut costs for businesses and help out consumers. Possible Concerns if Cryptocurrencies Replace Cash Of course, there are also some huge challenges and concerns with this scenario. If cryptocurrencies outpace cash in terms of usage, traditional currencies will lose value without any means of recourse. Should cryptocurrencies take over entirely, new infrastructure would have to be developed in order to allow the world to adapt. There would inevitably be difficulties with the transition, as cash could become incompatible quite quickly, leaving some people with lost assets. Established financial institutions would likely have to scramble to change their ways. 
It is important to note that while the initial Bitcoin-mania saw quite a few businesses offer to accept the cryptocurrency, that list has steadily dwindled, bringing back skepticism about its use as a medium of exchange.
no
Cryptocurrency
Can Bitcoin and other cryptocurrencies be manipulated easily?
yes_statement
"bitcoin" and other "cryptocurrencies" can be "easily" "manipulated".. manipulating "bitcoin" and other "cryptocurrencies" is easy.
https://newrepublic.com/article/160905/tether-cryptocurrency-scam-enrich-bitcoin-investors
Is the Cryptocurrency Tether Just a Scam to Enrich Bitcoin Investors ...
Is Tether Just a Scam to Enrich Bitcoin Investors? A widely used cryptocurrency can’t escape investigation and controversy—and it may be fueling another coin bubble. Shutterstock Premier Consumers who invest in cryptoassets “should be prepared to lose all their money,” the U.K’s Financial Conduct Authority warned cryptocurrency investors on Monday. That message came amid an 11 percent decline in the price of Bitcoin, the industry’s premier asset, whose value had tripled in the last three months. But it’s a warning that might also apply to holders of a cryptocurrency that’s supposed to be one of the pillars of this innovative financial system. Tether, the third-most widely held coin by value (Ethereum is second), is unique among its peers. In a market built largely on speculation, Tether is a stablecoin, pegged to the dollar at a 1-to-1 ratio. Tethers help provide liquidity and offer a widely recognized token that can facilitate transactions between various cryptocurrencies. In the world of crypto markets, they essentially act as a digital dollar, and they’re everywhere. On some days, Tether’s trading volume exceeds that of Bitcoin. But the question that hounds Tether—and is the subject of an investigation by the New York attorney general’s office—is whether its most attractive quality is really just to artificially inflate the value of Bitcoin. In other words, is Tether actually a tool for cryptocurrency insiders to get rich on the market’s hottest—and highly manipulable—commodity? Tether, which was founded under the brand name Realcoin in 2014, isn’t decentralized like Bitcoin or many other cryptocurrencies: One company owns, mints, and manages the Tether supply, which means it’s also not transparent. And Tether isn’t scarce; unlike currencies that are “mined,” its production isn’t bound by math and code that titrate the supply. Tether Limited, the company behind the eponymous coin, can mint as many coins as it wants. 
From there, it can use its own currency—and its relationship with Bitfinex, a cryptocurrency exchange also managed by Tether Limited’s executives—to buy other cryptocurrencies, conduct unregulated trading, and even potentially launder money. While Tether claims that it mints new coins in response to need—for example, I give Tether $100,000, and it, in return, gives me 100,000 USDT, as Tethers are called—its most pointed critics argue otherwise. High-powered lawyers, jaundiced traders, rogue economists, industry whistleblowers, crypto gadflies, and several U.S. law enforcement agencies claim that Tether is part of an elaborate scam that essentially boils down to using the company’s in-house currency to buy Bitcoin, which has the intended side effect of juicing the price of Bitcoin, and to otherwise manipulate cryptocurrency markets. As a document from one lawsuit warns, “control of an exchange and the opportunity to trade with non-existent money can allow a single individual or entity to dramatically influence cryptocommodity prices.” If you believe New York Attorney General Letitia James’s court filings, there’s a great deal of support for the accusations, and we may soon find out more. Bitfinex and Tether face a January 15 deadline to transfer millions of pages of documents to James’s office. Tether is also facing a major class-action lawsuit accusing it of contributing to “the largest bubble in human history”: In 2017, Tether printed a flurry of its currency in patterns that appeared to be linked to rises in Bitcoin, as an influential scholarly paper later found. The bubble popped, with Bitcoin losing 45 percent of its value across five days in December 2017. Billions of dollars of value disappeared almost overnight, with the decline continuing through 2018. Tether’s importance, and its value to the overall crypto economy, has vastly increased since then, when only a few billion Tethers were in circulation. Now there are more than 24 billion Tethers out there. 
In the first week of January, Tether printed more than 2 billion USDT. (It’s worth noting that it’s exceedingly hard to redeem USDT from Tether Limited for U.S. dollars; Tether requires a $100,000 minimum per transaction, along with a 0.1 percent fee.) The manic production of Tethers has become a joke online. Posts from accounts that monitor large cryptocurrency transactions, such as @glassnodealerts and @whale_alert, attract sardonic replies from people accusing the company of running a Ponzi scheme and rocket ship emojis from traders who want to see the company pump the Bitcoin market even more. A popular meme shows a photo of a speeding armored truck bedecked in the Tether logo, its doors flung open, money flying into the air. The jokes about manipulating the market and Tether’s seeming print-at-will attitude have gotten so loud that Paolo Ardoino, the CTO of Bitfinex and Tether, will respond to a @whale_alert message to explain why, for instance, Tether is printing $400 million worth of its currency at 8 a.m. on a Saturday. Whatever the investigation in New York turns up, Tether’s short history is already replete with strange criminal characters, unsolved hacks, sudden switches between overseas banks, and huge, unexplained losses. There’s likely more to come. As a connective tool for the larger crypto economy, the potential of Tether was clear. The class-action lawsuit puts it simply: “Tether’s promises were the foundation of USDT’s value. If Tether were telling the truth, a USDT would combine the best aspects of fiat currency and crypto-assets: It would be stable and safe like the U.S. dollar but also, like other crypto-assets, easily transferable across different crypto-exchanges, and free from many government regulations.” That perception of stability was always a myth.
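The redemption terms quoted above (a $100,000 minimum per transaction and a 0.1 percent fee) imply concrete numbers for anyone trying to exit directly to dollars. A small illustrative sketch; the helper function is hypothetical, not Tether's actual process:

```python
def redemption_proceeds(usdt, minimum=100_000, fee_rate=0.001):
    """Net USD received when redeeming USDT directly with Tether Limited,
    under the minimum and fee quoted in the article."""
    if usdt < minimum:
        raise ValueError("below Tether's stated $100,000 minimum")
    return usdt * (1 - fee_rate)

# Redeeming the minimum ticket costs a $100 fee.
print(round(redemption_proceeds(100_000), 2))  # → 99900.0
```

The minimum alone puts direct redemption out of reach for most retail holders, which is part of why USDT trades almost entirely on secondary markets.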
In 2016, someone hacked into Bitfinex and stole 120,000 Bitcoins, which resulted in Bitfinex cutting more than a third of the value off each customer’s account—although, reportedly, not for a favored few. Tether had long claimed that for every USDT it put into circulation, it would have one U.S. dollar in the bank. But after years of evasions and refusals to release a complete audit of its finances, a Tether lawyer finally admitted, in a 2019 court filing, that Tether was only 74 percent backed—a number that seemed to include cash, securities, Bitcoin, and other money owed to Tether. Tether’s continued refusal to fully audit itself, combined with its feverish printing of new coins, has led many critics to question even this 74 percent number. Then, last August, John M. Griffin and Amin Shams, two academics who study cryptocurrencies, published the final version of a paper that had been attracting great attention in the cryptocurrency world since it was published in an earlier form in June 2018. Their 119-page study, “Is Bitcoin Really Un-Tethered?” analyzed flows of Tether and Bitcoin, finding that half the movement in Bitcoin prices during part of the 2017 bubble was driven by “one entity.” As the academics stated, “we find that purchases with Tether are timed following market downturns and result in sizable increases in Bitcoin prices.” Griffin and Shams’s analysis also suggested that Tether wasn’t sufficiently backed and that the company might be printing coins and moving assets around to cover holes in its balance sheet. “Tether claimed our paper was incorrect,” said Griffin in a phone call. “But we appreciate the fact that due to the work of the New York AG, the lawyer on record admitted their currency was unbacked.
Tether has confirmed the main finding of our paper.” (Paolo Ardoino did not respond when contacted for comment via Twitter, but Tether general counsel Stuart Hoegner issued a statement to TNR calling the study “roundly discredited.” He claimed that “there is no causal relationship between the issuance of Tethers and market movements up or down” and that “Tether is always 100% backed by Tether reserves, which include traditional currency and cash equivalents.”)* The trouble with Tether is not just one fly-by-night company with opaque financial dealings. Bitfinex and Tether’s web of relationships extends throughout the cryptocurrency world, encompassing numerous exchanges, wealthy traders, and unaccountable executives living in the margins between legal jurisdictions. Even Bitfinex has portrayed itself as a victim of yet another concern, a Panamanian “shadow bank” called Crypto Capital that handled money for major crypto exchanges—until some of its backers, including former NFL owner Reginald Fowler, were arrested on embezzlement charges. Bitfinex maintains that Crypto Capital made off with $850 million of its money but that the two companies never even had a written contract. (The New York Attorney General has alleged that Bitfinex used Tether funds to cover up the shortfall.) If Tether’s critics are right and this is a rehash of the 2017 bubble—but bigger—how long can the company keep pumping the Bitcoin market while multiple investigations bear down on it? And if the price of Bitcoin can be manipulated—by a company that simply prints digital money (not unlike the Federal Reserve’s practice of quantitative easing, a policy despised by Bitcoiners)—doesn’t that undercut one of the core selling points of Bitcoin? “If you believe the asset is riskless for long enough, it will find itself in the infinite variety of structures which need a riskless asset,” wrote Patrick McKenzie, a Silicon Valley engineer, in an analysis of Tether.
“And when those structures suddenly have a hole where their riskless asset should be, calamity quickly follows.” The danger for the crypto market is that that hole might soon appear. The Treasury Department has signaled interest in further regulating stablecoins. (Tether has been used for money-laundering and in attempts to bribe Department of Justice officials.) The class-action lawsuit’s discovery process may force Tether to reveal more about its internal operations and decision-making, along with its murky banking relationships. At its most devastating, this array of investigations and legal and regulatory threats could bring down Bitfinex and Tether entirely and cause billions of dollars of investor losses. Should Tether collapse, via government crackdown or a run on the Tether bank, the prices of Bitcoin—which, as of this writing, has a market capitalization of more than $639 billion—and other cryptocurrencies may plummet. Tens of billions of dollars in investments will disappear—from institutional investors who can take the hit, yes, but also from thousands of everyday people who decided to follow the crypto boom and put their assets into Bitcoin. Cryptocurrencies have a certain unreality to them, but the damage would be widespread and very real.
* The trouble with Tether is not just one fly-by-night company with opaque financial dealings. Bitfinex and Tether’s web of relationships extends throughout the cryptocurrency world, encompassing numerous exchanges, wealthy traders, and unaccountable executives living in the margins between legal jurisdictions. Even Bitfinex has portrayed itself as a victim of yet another concern, a Panamanian “shadow bank” called Crypto Capital that handled money for major crypto exchanges—until some of its backers, including former NFL owner Reginald Fowler, were arrested on embezzlement charges. Bitfinex maintains that Crypto Capital made off with $850 million of its money but that the two companies never even had a written contract. (The New York Attorney General has alleged that Bitfinex used Tether funds to cover up the shortfall.) If Tether’s critics are right and this is a rehash of the 2017 bubble—but bigger—how long can the company keep pumping the Bitcoin market while multiple investigations bear down on it? And if the price of Bitcoin can be manipulated—by a company that simply prints digital money (not unlike the Federal Reserve’s practice of quantitative easing, a policy despised by Bitcoiners)—doesn’t that undercut one of the core selling points of Bitcoin? “If you believe the asset is riskless for long enough, it will find itself in the infinite variety of structures which need a riskless asset,” wrote Patrick McKenzie, a Silicon Valley engineer, in an analysis of Tether. “And when those structures suddenly have a hole where their riskless asset should be, calamity quickly follows.” The danger for the crypto market is that that hole might soon appear. The Treasury Department has signaled interest in further regulating stablecoins. (Tether has been used for money-laundering and in attempts to bribe Department of Justice officials.) 
The class-action lawsuit’s discovery process may force Tether to reveal more about its internal operations and decision-making, along with its murky banking relationships.
yes
Cryptocurrency
Can Bitcoin and other cryptocurrencies be manipulated easily?
yes_statement
"bitcoin" and other "cryptocurrencies" can be "easily" "manipulated".. manipulating "bitcoin" and other "cryptocurrencies" is easy.
https://www.linkedin.com/pulse/common-crypto-manipulation-techniques-nitin-kumar-
Spotting the 5 Common Crypto Price Manipulation Patterns
Nitin Kumar This article sheds light on common manipulation techniques in the cryptocurrency world; it will educate readers on spotting common patterns and analyzes the Bitcoin drop on July 20th, 2021. Introduction Market manipulation has existed for as long as tradable assets have existed, and cryptocurrencies are no exception. The cryptocurrency space is also immature, with nascent regulations making it vulnerable to market manipulations not easily possible in mature markets. In this article, I will dive into some of the common market manipulation techniques, identify patterns around them, and equip people to spot abusive behavior. There are numerous forces at work every day in the crypto markets targeting price manipulation to spook newbie investors and inexperienced traders into panicking and playing right into the hands of these manipulators. Manipulation 101 The manipulation phenomenon is not exclusive to cryptocurrencies; these tactics have been outlawed by the SEC in mature markets where regulations are established. Stringent monitoring, reporting, and auditing requirements create risk for those who perpetrate them. Mature markets also have well-developed mechanisms to quickly identify and prosecute miscreants. This is far from where the crypto world is today: unregulated, anonymous, and a place where people with large holdings, i.e., whales, can act with impunity. While this might appear like a hot mess and a massive problem, it is not viewed that way by the proponents of the new economy. Cryptocurrencies are about financial freedom, free from the structures and barriers of the opaque old economy. This also means it is an opportunity for users to assume individual responsibility for their finances; hence they must manage the risks on their own. However, no one wants to be manipulated, and one needs to get properly educated about strategic and tactical methods of manipulation to defend against them.
Let us examine the commonly used manipulation techniques in cryptocurrency markets. Pump and Dump The most pervasive technique used in the crypto markets today is pump and dump; it also has one of the highest impacts. It involves insiders or other core market participants pumping up the value of a coin until it gains attention. Once traders and investors jump into the market, the group dumps the coin for a neat profit. The technique was erstwhile deployed on penny stocks, but low-liquidity altcoins are a perfect target in the cryptocurrency space. A low market cap shitcoin can be pumped easily, and a lot of this manipulation is well coordinated by hundreds and thousands of users who come together on Reddit, Telegram, etc. to hatch the plot; some of these groups also have obvious names like rocket pump, etc. It is also impossible to predict the exact time of the pump or the dump, and this tactic hurts folks late to the pump, late to the dump, or even those who participated in it. There are multiple patterns to analyze to spot a pump and dump scheme. First, most pumps and dumps occur in low market cap coins outside the top 100 list; exceptions occur in high caps, although rarely. Specifically vulnerable are coins listed on only a few exchanges, which allows greater manipulation and leaves only one or two venues for the victim to enter and exit. Lots of price movement up and down on only a handful of exchanges is an indication of coordinated action rather than organic market behavior. Just remember, a dump often also harms those who think they can profit from it. Second, volume is a good indicator. The pump and dump artists have likely already accumulated a lot of coins, and a high volume appearing from nowhere before a price increase is suspicious. Last, watch for the price being moved to a point that creates FOMO for the masses. Hence, if we cannot understand why a coin is pumping, then it is best to stay away.
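The volume pattern described above (high volume appearing from nowhere before a price move) can be turned into a crude screen. This is a sketch; the five-interval window and 4x threshold are illustrative assumptions, not rules from the article:

```python
def flag_volume_spikes(volumes, window=5, factor=4.0):
    """Flag indices whose volume exceeds `factor` times the trailing average.

    A crude stand-in for the article's heuristic: unexplained volume
    appearing out of nowhere ahead of a price increase is suspicious.
    """
    flagged = []
    for i in range(window, len(volumes)):
        trailing = volumes[i - window:i]
        avg = sum(trailing) / window
        if avg > 0 and volumes[i] > factor * avg:
            flagged.append(i)
    return flagged

# Quiet market, then a sudden ~10x volume burst at index 7.
series = [100, 110, 95, 105, 100, 98, 102, 1000]
print(flag_volume_spikes(series))  # → [7]
```

A screen like this only raises a flag; as the article says, if you cannot find an organic reason for the spike, staying away is the safer call.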
Whale Walls The Whale Wall technique used to be frequently visible in prior Bitcoin cycles; it is somewhat less prevalent now but still happens on shady exchanges. In the old economy, this technique used to be called order book spoofing. It is a tactic where a market participant places a large set of orders with no intention of ever having them executed; the intent is to create the illusion of large demand or supply in the market. Order book spoofing was used in the commodities market in the past, and even reputable old economy institutions have been in trouble for executing these techniques. I have seen this in the crypto markets during the cycles of 2013 and 2017, when whales built up large buy and sell walls on the order books of exchanges. When I saw these back then I was inclined to react; e.g., in 2013 I saw 3000 BTC orders which triggered me to sell, spoofing my analysis and catching me off guard. Older, wiser, and more experienced from those days for sure! What probably occurred was a whale accumulating BTC secretly while markets were hitting sell orders; the sell wall suddenly vanishes as the whale pulls out his order after consummating the act. This can also happen with whales building buy walls to spoof analysis in the other direction, making you think there is support to hold up selling pressure. The triggering of bullish sentiment makes people assume long positions, and then liquidation grenades explode. Whale walls and spoofed order books can create exponential profits for whales, as the same people take positions on futures markets too. They profit from volatility in a derivative market by manipulating price discovery in spot markets. This has become a lot easier to catch and mitigate against nowadays, given all the data, exchange features, and alerts now available. The manipulation technique succeeded in driving Bitcoin down from approximately 32K to just under 30K.
The newbies panicked and sold, but many of these bitcoins were picked up by smaller retail buyers, and more accumulation on-chain continues. On July 19th, approximately 79,000 BTC was moved by a whale or whales to Coinbase to create a sell wall and induce a downward price. Normally, this quantity of BTC is bought and sold OTC. However, the sell wall was executed out of no choice when prices did not fall to expected levels. Wash Trading Wash Trading is a variant of the whale wall technique and is used to create an illusion of an active market for a specific asset. Just like other tactics, it is illegal to do this in more mature, old economy financial markets but appears to be fair game on the crypto turf as of now. Wash trading typically entails the same asset being bought and sold simultaneously by one individual or a coordinated set of folks projecting a false volume. Most traders look at the volume and liquidity of an asset before they jump in, and quickly discover liquidity false alarms when wash trading prevails. Propagators of wash trading can typically be traced to shady exchanges themselves, scamming crypto projects, and the people backing these projects. People have also leveraged technology, e.g., developed bots to fake volumes and contaminate sites like Coinmarketcap, which will attract newbie investors and traders. I have made this point several times and have been very critical of Coinmarketcap about improving their accuracy; having said that, the team there has been making improvements, slowly, but still not enough to take a leap of faith, given the numbers are not yet what they appear to be. Avoiding shady exchanges is a first step in staying clear of wash trading; periodically analyze the order books of your exchange to see if there is any uniformity in buy and sell order patterns. Look at attributes like timestamps, matching pairs, and order sizes, and see if there is any symmetry brewing. On high liquidity exchanges, large bid-ask spreads should raise alarms.
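The order-book checks suggested above can be approximated in a few lines: compute the relative bid-ask spread and look for order sizes mirrored on both sides of the book. The exact-size matching rule and the example numbers are simplifying assumptions for illustration:

```python
def relative_spread(best_bid, best_ask):
    """Bid-ask spread as a fraction of the midpoint price."""
    mid = (best_bid + best_ask) / 2
    return (best_ask - best_bid) / mid

def mirrored_orders(bids, asks):
    """Return order sizes that appear identically on both sides of the book,
    a crude symmetry signal of the kind the article suggests watching for."""
    return sorted(set(bids) & set(asks))

# On a liquid venue, a ~2% spread is unusually wide and worth a second look.
spread = relative_spread(best_bid=29700.0, best_ask=30300.0)
print(round(spread, 3))  # → 0.02

# Identical 5.0 and 12.5 lots resting on both sides may hint at self-trading.
print(mirrored_orders([5.0, 12.5, 3.1], [12.5, 5.0, 7.7]))  # → [5.0, 12.5]
```

Neither check proves wash trading on its own; they are cheap filters to decide which venues deserve deeper scrutiny.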
Nothing compares to doing your own research, analysis, and community scans to derive your own conclusions. Remember to validate every crypto influencer touting coins or exchanges. Stop Hunting One of the most nefarious tactics deployed by crypto whales is stop hunting, i.e., hunting for all the visible stop-loss milestones. This is used to force market participants out of their positions by driving prices low enough to trigger their stops. The motivation for whales is to pick up the asset at a lower price once multiple participants' hands are forced out. Most traders place their stops at key technical levels, and absent other manipulation tactics, these usually signify key capitulation levels, showing whales what levels to target when pushing the market down. For example, if coin XYZ has stops positioned at a certain level ABC, then many sell orders are executed to push the price to these stops; once it attains these key technical levels, a myriad of automated sell orders are executed, with whales scooping up the bounty and the market recovering almost immediately as many others follow the whale buys. Given crypto markets run 24x7, unaware traders wake up to discover their stops were hit and prices are back up to where they last saw them, but their positions are lost. Given that placing stops is still essential to managing risk if the market is legitimately moving down, it becomes tricky to spot this technique and avoid being ambushed by whale attacks. One way to do this is through a stop-limit order, i.e., a stop order with a separate execution (limit) price, placed a few points below the stop level. It provides a modest advantage, protecting you from larger downside risks while leaving some room to ascertain a legitimate capitulation point. Many exchanges offer a wide variety of stop-limit orders, e.g., conditional orders, cold orders, etc.
and one should analyze whether these work for individual needs to mitigate being stop hunted. FUD Fear, Uncertainty, and Doubt (FUD) is one of the most effective manipulation techniques for moving crypto asset prices without even buying or selling a coin. Newbie investors and day traders get shaken up by negative news and run for the exit doors quickly. Traders dislike taking even small losses, and hence if half-truths or fake narratives are created around a specific project or asset, you can see a large price impact, i.e., sell the rumor and buy the news. False propaganda is used routinely and with great effect by several hedge funds. It is very typical in many markets to push false information after obtaining a sizable position. The whole crypto space is filled with a lot of garbage content from crypto newbies, influencers, second- and third-tier media, etc., making fake news harder to spot for the average retail investor. In such circumstances, mainstream media prevails, and people are compelled to digest the narratives from those sources. Mitigating this tactic comes down to the individual analyzing news and narratives more deeply and dispassionately. Data and facts backing up the claims are key things to analyze; the source, i.e., known biases, trolls, etc. peddling the narrative, is another filter; and the motives of the people spreading the FUD and the people behind the outlet should also be scrutinized. I am also not in the camp that dismisses everything as FUD; at times concerns are legitimate and flag the real risk of being wrecked by a useless crypto project. While the multiple dozen China bans replayed by the media are clear items of FUD to totally dismiss, Tether not allowing a public audit is not to be ignored. Concluding Thoughts The crypto markets are relatively immature, and it shows in the ease of execution of some of these methods. This is the only asset class where one can get levels of transparency through blockchain visibility, immutability, and openness.
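The stop-limit mechanics described in the Stop Hunting section above can be sketched as a trigger check: the stop price arms the order, and the separate limit price bounds the worst acceptable fill. A minimal sell-side illustration; the helper and its return shape are hypothetical, not any exchange's actual API:

```python
def stop_limit_triggered(last_price, stop_price, limit_price, side="sell"):
    """Sell stop-limit: once `last_price` falls to the stop, submit a limit
    order that will not fill below `limit_price`. Returns the order to
    submit, or None if the stop has not been hit yet."""
    if side != "sell":
        raise ValueError("this sketch covers the sell side only")
    if last_price <= stop_price:
        return {"type": "limit", "side": "sell", "price": limit_price}
    return None

# Stop at 29000 with the limit a few points below, as the article suggests.
print(stop_limit_triggered(29500, stop_price=29000, limit_price=28900))  # → None
print(stop_limit_triggered(28950, stop_price=29000, limit_price=28900))
# → {'type': 'limit', 'side': 'sell', 'price': 28900}
```

The limit floor is what distinguishes this from a plain stop order: a hunt that briefly spikes the price through the stop cannot fill you at an arbitrarily bad price below the limit.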
As the space has progressed over the last decade and continues to mature, the manipulation games and lawlessness of the wild west are diminishing. Most reputable crypto exchanges will not allow many of these tactics on their platforms and will flag them. Although there is no designated crypto regulator, the CFTC and SEC do take notice occasionally and activate corrective actions. People, however, should know the manipulation arsenal and try to avoid these tactics. I do hope this article added to your acumen. The biggest source of crypto manipulation comes from the central banks. Coins that they can't track are a serious threat to their fiat theft mechanisms; however, the market cap of the whole crypto space is less than $1trn, which is peanuts to them. They don't want the masses being attracted to safe havens (including gold) whilst they are trashing their fiat.
Nitin Kumar This article sheds light on the common manipulation techniques in the cryptocurrency world; it will educate readers on spotting common trends and analyzes the Bitcoin drop on July 20th, 2021. Introduction Market manipulation has existed for as long as tradable assets have existed, and cryptocurrencies are no exception. The cryptocurrency space is also immature, with nascent regulations making it vulnerable to market manipulations not easily possible in mature markets. In this article, I will dive into some of the common market manipulation techniques, identify patterns around them, and equip people to spot abusive behavior. There are numerous forces at work every day in the crypto markets targeting price manipulation, aiming to spook newbie investors and inexperienced traders into panicking and playing right into the hands of these manipulators. Manipulation 101 The manipulation phenomenon is not exclusive to cryptocurrencies; these tactics have been outlawed by the SEC in mature markets where regulations are established. Stringent monitoring, reporting, and auditing requirements create risk for those who perpetrate them. Mature markets also have well-developed mechanisms to quickly identify and prosecute miscreants. This is far from where the crypto world is today: unregulated and anonymous, where people with large holdings, i.e., whales, can act with impunity. While this might appear to be a hot mess and a massive problem, it is not viewed that way by the proponents of the new economy. Cryptocurrencies are about financial freedom, free from the structures and barriers of the opaque old economy. This also means it is an opportunity for users to assume individual responsibility for their finances; hence they must manage the risks on their own. However, no one wants to be manipulated, and one needs to get properly educated about the strategic and tactical methods of manipulation to defend against them.
Let us examine the commonly used manipulation techniques in cryptocurrency markets. Pump and Dump The most pervasive technique used in the crypto markets today is the pump and dump; it also has one of the highest impacts.
yes
Cryptocurrency
Can Bitcoin and other cryptocurrencies be manipulated easily?
yes_statement
"bitcoin" and other "cryptocurrencies" can be "easily" "manipulated".. manipulating "bitcoin" and other "cryptocurrencies" is easy.
https://www.coinbase.com/learn/crypto-basics/what-is-cryptocurrency
What is cryptocurrency? | Coinbase
What is cryptocurrency? Bitcoin, Ethereum, and other crypto are revolutionizing how we invest, bank, and use money. Read this beginner’s guide to learn more. At its core, cryptocurrency is typically decentralized digital money designed to be used over the internet. Bitcoin, which launched in 2008, was the first cryptocurrency, and it remains by far the biggest, most influential, and best known. In the decade since, Bitcoin and other cryptocurrencies like Ethereum have grown as digital alternatives to money issued by governments. The most popular cryptocurrencies, by market capitalization, are Bitcoin, Ethereum, Bitcoin Cash, and Litecoin. Other well-known cryptocurrencies include Tezos, EOS, and ZCash. Some are similar to Bitcoin. Others are based on different technologies, or have new features that allow them to do more than transfer value. Crypto makes it possible to transfer value online without the need for a middleman like a bank or payment processor, allowing value to transfer globally, near-instantly, 24/7, for low fees. Cryptocurrencies are usually not issued or controlled by any government or other central authority. They’re managed by peer-to-peer networks of computers running free, open-source software. Generally, anyone who wants to participate is able to. If a bank or government isn’t involved, how is crypto secure? It’s secure because all transactions are vetted by a technology called a blockchain. A cryptocurrency blockchain is similar to a bank’s balance sheet or ledger. Each currency has its own blockchain, which is an ongoing, constantly re-verified record of every single transaction ever made using that currency. Unlike a bank’s ledger, a crypto blockchain is distributed across participants of the digital currency’s entire network. No company, country, or third party is in control of it, and anyone can participate.
A blockchain is a breakthrough technology only recently made possible through decades of computer science and mathematical innovations. Most importantly, cryptocurrencies allow individuals to take complete control over their assets. Key concepts: Transferability: Crypto makes transactions with people on the other side of the planet as seamless as paying with cash at your local grocery store. Privacy: When paying with cryptocurrency, you don’t need to provide unnecessary personal information to the merchant. This means your financial information is protected from being shared with third parties like banks, payment services, advertisers, and credit-rating agencies. And because no sensitive information needs to be sent over the internet, there is very little risk of your financial information being compromised or your identity being stolen. Security: Almost all cryptocurrencies, including Bitcoin, Ethereum, Tezos, and Bitcoin Cash, are secured using a technology called a blockchain, which is constantly checked and verified by a huge amount of computing power. Portability: Because your cryptocurrency holdings aren’t tied to a financial institution or government, they are available to you no matter where you are in the world or what happens to any of the global finance system’s major intermediaries. Transparency: Every transaction on the Bitcoin, Ethereum, Tezos, and Bitcoin Cash networks is published publicly, without exception. This means there's no room for manipulation of transactions, changing the money supply, or adjusting the rules mid-game. Irreversibility: Unlike a credit card payment, cryptocurrency payments can’t be reversed. For merchants, this hugely reduces the likelihood of being defrauded. For customers, it has the potential to make commerce cheaper by eliminating one of the major arguments credit card companies make for their high processing fees.
Safety: The network powering Bitcoin has never been hacked. And the fundamental ideas behind cryptocurrencies help make them safe: the systems are permissionless, and the core software is open source, meaning countless computer scientists and cryptographers have been able to examine all aspects of the networks and their security. Why is cryptocurrency the future of finance? Cryptocurrencies are the first alternative to the traditional banking system, and have powerful advantages over previous payment methods and traditional classes of assets. Think of them as Money 2.0: a new kind of cash that is native to the internet, which gives it the potential to be the fastest, easiest, cheapest, safest, and most universal way to exchange value that the world has ever seen. Cryptocurrencies can be used to buy goods or services or held as part of an investment strategy, but they can’t be manipulated by any central authority, simply because there isn’t one. No matter what happens to a government, your cryptocurrency will remain secure. Digital currencies provide equality of opportunity, regardless of where you were born or where you live. As long as you have a smartphone or another internet-connected device, you have the same crypto access as everyone else. Cryptocurrencies create unique opportunities for expanding people’s economic freedom around the world. Digital currencies’ essential borderlessness facilitates free trade, even in countries with tight government controls over citizens’ finances. In places where inflation is a key problem, cryptocurrencies can provide an alternative to dysfunctional fiat currencies for savings and payments. As part of a broader investment strategy, crypto can be approached in a wide variety of ways. One approach is to buy and hold something like bitcoin, which has gone from virtually worthless in 2008 to thousands of dollars a coin today. Another would be a more active strategy, buying and selling cryptocurrencies that experience volatility.
One option for crypto-curious investors looking to minimize risk is USD Coin, which is pegged 1:1 to the value of the U.S. dollar. It offers the benefits of crypto, including the ability to transfer money internationally quickly and cheaply, with the stability of a traditional currency. Coinbase customers that hold USDC earn rewards, making it an appealing alternative to a traditional savings account. Digital currencies provide equality of opportunity, regardless of where you were born or where you live. Why invest in cryptocurrency? Online exchanges like Coinbase have made buying and selling cryptocurrencies easy, secure, and rewarding. It only takes a few minutes to create a secure account, and you can buy cryptocurrency using your debit card or bank account. You can buy as little (or as much) crypto as you want, since you can buy fractional coins. For example, you can buy $25.00 worth of bitcoin. Many digital currencies, including USD Coin and Tezos, offer holders rewards just for having them. On Coinbase, you can earn 1% APY on USD Coin, which is much higher than most traditional savings accounts. You can also earn up to 5% APY when you stake Tezos on Coinbase. Learn more about Tezos staking rewards. Unlike stocks or bonds, you can easily transfer your cryptocurrency to anyone else or use it to pay for goods and services. Millions of people hold bitcoin and other digital currencies as part of their investment portfolios. What is a stablecoin? USD Coin is an example of a type of cryptocurrency called a stablecoin. You can think of these as crypto dollars: they’re designed to minimize volatility and maximize utility. Stablecoins offer some of the best attributes of cryptocurrency (seamless global transactions, security, and privacy) with the valuation stability of fiat currencies. Stablecoins do this by pegging their value to an external factor, typically a fiat currency like the U.S. dollar or a commodity like gold.
As a result, their valuations are less likely to shift dramatically from day to day. That stability can increase their utility for everyday use as money, because both buyers and merchants can be confident that the value of their transaction will remain relatively consistent over a longer timeframe. They can also work as a safe and stable way to save money, like a traditional savings account. Key question What is the future of cryptocurrency? Experts often talk about the ways crypto can provide solutions to the shortcomings of our current financial system. High fees, identity theft, and extreme economic inequality are an unfortunate part of our current financial system and they’re also things cryptocurrencies have the potential to address. The technology that powers digital currencies also has wide-ranging potential beyond the financial industry, from revolutionizing supply chains to building the new, decentralized internet. How does cryptocurrency work? Bitcoin is the first and most well-known, but there are thousands of types of cryptocurrencies. Many, like Litecoin and Bitcoin Cash, share Bitcoin’s core characteristics but explore new ways to process transactions. Others offer a wider range of features. Ethereum, for example, can be used to run applications and create contracts. All four, however, are based on an idea called the blockchain, which is key to understanding how cryptocurrency works. At its most basic, a blockchain is a list of transactions that anyone can view and verify. The Bitcoin blockchain, for example, is a record of every time someone sends or receives bitcoin. This list of transactions is fundamental for most cryptocurrencies because it enables secure payments to be made between people who don’t know each other without having to go through a third-party verifier like a bank. Blockchain technology is also exciting because it has many uses beyond cryptocurrency. 
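The "list of transactions that anyone can view and verify" described above can be illustrated with a toy hash chain in Python. This is a simplified sketch, not Bitcoin's actual data structures: each block stores the hash of its predecessor, so altering an old transaction invalidates every later link in the chain.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's contents deterministically (sorted keys).
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions, prev_hash):
    return {"transactions": transactions, "prev_hash": prev_hash}

def chain_is_valid(chain) -> bool:
    # Each block must reference the hash of the block before it.
    return all(curr["prev_hash"] == block_hash(prev)
               for prev, curr in zip(chain, chain[1:]))

# Build a tiny three-block chain.
genesis = make_block(["alice pays bob 1"], prev_hash="0" * 64)
block2 = make_block(["bob pays carol 1"], prev_hash=block_hash(genesis))
block3 = make_block(["carol pays dan 1"], prev_hash=block_hash(block2))
chain = [genesis, block2, block3]
assert chain_is_valid(chain)

# Tampering with an old transaction breaks every later link.
genesis["transactions"][0] = "alice pays mallory 1"
assert not chain_is_valid(chain)
```

Because every participant can recompute these hashes independently, no single party needs to be trusted to vouch for the history.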
Blockchains are being used to explore medical research, improve the sharing of healthcare records, streamline supply chains, increase privacy on the internet, and so much more. The principles behind both bitcoin and the Bitcoin blockchain first appeared online in a white paper published in late 2008 by a person or group going by the name Satoshi Nakamoto. The blockchain ledger is split across all the computers on the network, which are constantly verifying that the blockchain is accurate. This means there is no central vault, entity, or database that can be hacked, stolen, or manipulated. Key concept Cryptocurrencies use a technology called public-private key cryptography to transfer coin ownership on a secure and distributed ledger. A private key is an ultra-secure password that never needs to be shared with anyone, with which you can send value on the network. An associated public key can be freely and safely shared with others to receive value on the network. From the public key, it is impossible for anyone to guess your private key. What is cryptocurrency mining? Most cryptocurrencies are ‘mined’ via a decentralized (also known as peer-to-peer) network of computers. But mining doesn’t just generate more bitcoin or Ethereum - it’s also the mechanism that updates and secures the network by constantly verifying the public blockchain ledger and adding new transactions. Technically, anyone with a computer and an internet connection can become a miner. But before you get excited, it’s worth noting that mining is not always profitable. Depending on which cryptocurrency you’re mining, how fast your computer is, and the cost of electricity in your area, you may end up spending more on mining than you earn back in cryptocurrency. As a result, most crypto mining these days is done by companies that specialize in it, or by large groups of individuals who all contribute their computing power.
How does the network encourage miners to participate in maintaining the blockchain? Again, taking Bitcoin as an example, the network holds a lottery in which all the mining rigs around the world race to become the first to solve a math problem, which also verifies and updates the blockchain with new transactions. Each winner is awarded new bitcoin, which can then make its way into the broader marketplace. Key question Where do cryptocurrencies get their value? The economic value of cryptocurrency, like all goods and services, comes from supply and demand. Supply refers to how much is available—like how many bitcoin are available to buy at any moment in time. Demand refers to people’s desire to own it—as in how many people want to buy bitcoin and how strongly they want it. The value of a cryptocurrency will always be a balance of both factors. There are also other types of value. For example, there’s the value you get from using a cryptocurrency. Many people enjoy spending or gifting crypto, meaning that it gives them a sense of pride to support an exciting new financial system. Similarly, some people like to shop with bitcoin because they like its low fees and want to encourage businesses to accept it. How to buy bitcoin and other cryptocurrency The easiest way to acquire cryptocurrency is to purchase on an online exchange like Coinbase. One good approach is to ask yourself what you’re hoping to do with crypto and choose the currency that will help you achieve your goals. For example, if you want to buy a laptop with crypto, bitcoin might be a good option because it is the most widely accepted cryptocurrency. On the other hand, if you want to play a digital card game, then Ethereum is a popular choice. Keep in mind that you do not need to buy a whole coin. On Coinbase, you can buy portions of coins in increments as little as 2 dollars, euros, pounds, or your local currency. How do you store cryptocurrency? 
Storing crypto is similar to storing cash, which means you need to protect it from theft and loss. There are many ways to store crypto both online and off, but the simplest solution is via a trusted, secure exchange like Coinbase. Coinbase customers can securely store, send, receive, and convert crypto by signing into their account on a computer, tablet, or phone. Want to transfer money from your wallet to a bank account? The Coinbase app makes it as easy as transferring funds from one bank to another. (Much like conventional bank transfers or ATM withdrawals, exchanges like Coinbase set a daily limit, and it might take from a few days to a week for the transaction to be completed.) What can you do with cryptocurrency? There’s a wide range of things you can do with cryptocurrency, and the list grows with time. Here are a few ways to get started, from participating in everyday activities to exploring new technological frontiers: Gift it: Cryptocurrency makes a great gift for friends and family who are interested in learning about new technology. Tip someone: Authors, musicians, and other online content creators sometimes leave Bitcoin addresses or QR codes at the end of their articles. If you like their work, you can give a little crypto as a way of saying thanks. Explore unique new combinations of money and technology: Orchid is a VPN, which helps protect you when you’re online, and a digital currency at the same time. Basically it’s broken down into two parts, the Orchid VPN app and the OXT cryptocurrency, and it all runs on the Ethereum network. Intrigued? Read more here. Travel the world: Because cryptocurrency isn’t tied to a specific country, traveling with crypto can cut down on money exchange fees. There’s already a small but thriving community of self-titled “crypto nomads” who primarily, or in some cases exclusively, spend crypto when they travel.
Buy property in a virtual gaming world: Decentraland, which also runs on the Ethereum blockchain, is the first virtual world entirely owned by its users. Users can buy and sell land, avatar clothing, and all kinds of other stuff while partying in virtual nightclubs or mingling in virtual art galleries. Explore decentralized finance, or DeFi: A wide variety of new players are aiming to recreate the entire global financial system, from mutual-fund-like investments to loan-lending mechanisms and way beyond, without any central authorities.
This means there's no room for manipulation of transactions, changing the money supply, or adjusting the rules mid-game. Irreversibility Unlike a credit card payment, cryptocurrency payments can’t be reversed. For merchants, this hugely reduces the likelihood of being defrauded. For customers, it has the potential to make commerce cheaper by eliminating one of the major arguments credit card companies make for their high processing fees. Safety The network powering Bitcoin has never been hacked. And the fundamental ideas behind cryptocurrencies help make them safe: the systems are permissionless and the core software is open-source, meaning countless computer scientists and cryptographers have been able to examine all aspects of the networks and their security. Why is cryptocurrency the future of finance? Cryptocurrencies are the first alternative to the traditional banking system, and have powerful advantages over previous payment methods and traditional classes of assets. Think of them as Money 2.0. -- a new kind of cash that is native to the internet, which gives it the potential to be the fastest, easiest, cheapest, safest, and most universal way to exchange value that the world has ever seen. Cryptocurrencies can be used to buy goods or services or held as part of an investment strategy, but they can’t be manipulated by any central authority, simply because there isn’t one. No matter what happens to a government, your cryptocurrency will remain secure. Digital currencies provide equality of opportunity, regardless of where you were born or where you live. As long as you have a smartphone or another internet-connected device, you have the same crypto access as everyone else. Cryptocurrencies create unique opportunities for expanding people’s economic freedom around the world. Digital currencies’ essential borderlessness facilitates free trade, even in countries with tight government controls over citizens’ finances. 
In places where inflation is a key problem, cryptocurrencies can provide an alternative to dysfunctional fiat currencies for savings and payments.
no
Cryptocurrency
Can Bitcoin and other cryptocurrencies be manipulated easily?
no_statement
"bitcoin" and other "cryptocurrencies" cannot be "easily" "manipulated".. it is not easy to "manipulate" "bitcoin" and other "cryptocurrencies".
https://www.kitco.com/news/2023-02-16/Crypto-can-be-more-risky-than-penny-stocks-is-easily-manipulated-Ronald-AngSiy.html
Crypto can be 'more risky' than penny stocks, is easily manipulated ...
News Bites (Kitco News) - Aside from Bitcoin, Ether, and a few other cryptocurrencies with a high trading volume, investing in crypto is "more risky" than buying penny stocks, according to Ronald AngSiy, COO of CEO.ca, who previously worked as a blockchain expert. "Any individual can manipulate crypto prices today because of how small the market cap is," he observed. "Once you get to the smaller cryptocurrencies, that's where it's significantly more risky than a penny stock for investors." He highlighted that Tesla stock has a market cap of $650 billion, while the entire crypto market only has a market cap of $1 trillion, making crypto "easy" to manipulate. "If you look at the price of a Binance token, versus the price of a Tesla stock, you'll see how small the crypto market is," he suggested. "It's hard, in such a small market, to say how what's happening in the world will directly affect prices today." AngSiy suggested that there is "more pain ahead for crypto investors" in 2023, as the Federal Reserve continues to hike interest rates. "How that pain is manifested is hard to exactly say, because the crypto market is so small that you could have one or two front-end entities manipulate the price of whole crypto market," he said. He observed that since Bitcoin came into being in 2008, the crypto market has only experienced a "low interest-rate world," with the Effective Fed Funds Rate (EFFR) never rising beyond 2.5 percent until 2022. Currently, the EFFR is around 4.5 percent, and Fed Chairman Jerome Powell has stated that further hikes are expected until the Fed reaches a terminal rate of around 5 percent in 2023. "We've never seen it [crypto] in a high and growing interest-rate world, which is what is happening right now," he said. 
"When you look at low interest-rate environments, it's easier for investors to borrow money, and then to take that money and then allocate a portion of it to crypto… now you're seeing money being pulled back from risk assets [like crypto] and either allocated to safer assets, or you're seeing risk assets being margin called." To find out AngSiy's outlook for Bitcoin and Ether, watch the video above. Disclaimer: The views expressed in this article are those of the author and may not reflect those of Kitco Metals Inc. The author has made every effort to ensure accuracy of information provided; however, neither Kitco Metals Inc. nor the author can guarantee such accuracy. This article is strictly for informational purposes only. It is not a solicitation to make any exchange in commodities, securities or other financial instruments. Kitco Metals Inc. and the author of this article do not accept culpability for losses and/ or damages arising from the use of this publication.
News Bites (Kitco News) - Aside from Bitcoin, Ether, and a few other cryptocurrencies with a high trading volume, investing in crypto is "more risky" than buying penny stocks, according to Ronald AngSiy, COO of CEO.ca, who previously worked as a blockchain expert. "Any individual can manipulate crypto prices today because of how small the market cap is," he observed. "Once you get to the smaller cryptocurrencies, that's where it's significantly more risky than a penny stock for investors. " He highlighted that Tesla stock has a market cap of $650 billion, while the entire crypto market only has a market cap of $1 trillion, making crypto "easy" to manipulate. "If you look at the price of a Binance token, versus the price of a Tesla stock, you'll see how small the crypto market is," he suggested. "It's hard, in such a small market, to say how what's happening in the world will directly affect prices today. " AngSiy suggested that there is "more pain ahead for crypto investors" in 2023, as the Federal Reserve continues to hike interest rates. "How that pain is manifested is hard to exactly say, because the crypto market is so small that you could have one or two front-end entities manipulate the price of whole crypto market," he said. He observed that since Bitcoin came into being in 2008, the crypto market has only experienced a "low interest-rate world," with the Effective Fed Funds Rate (EFFR) never rising beyond 2.5 percent until 2022. Currently, the EFFR is around 4.5 percent, and Fed Chairman Jerome Powell has stated that further hikes are expected until the Fed reaches a terminal rate of around 5 percent in 2023. "We've never seen it [crypto] in a high and growing interest-rate world, which is what is happening right now," he said.
yes
Cryptocurrency
Can Bitcoin and other cryptocurrencies be manipulated easily?
no_statement
"bitcoin" and other "cryptocurrencies" cannot be "easily" "manipulated".. it is not easy to "manipulate" "bitcoin" and other "cryptocurrencies".
https://www.sofi.com/learn/content/double-spending/
What Is the Double Spending Problem with Bitcoin? | SoFi
As new forms of technology and money become publicly available, bad actors are often some of the earliest adopters because the asset is largely untested or unregulated and thus more easily manipulated. Bitcoin is no exception. Bitcoin’s completely digital currency network is decentralized—it has no central authority, regulators, or governing bodies to police thieves and hackers. Though traditional security entities don’t monitor the Bitcoin network for double-spending, other network defenses have been implemented to combat attacks that would otherwise threaten the network’s consensus mechanism and ledger of transactions, providing confidence to those who invest in Bitcoin. What Is the Double-Spending Problem? The double-spending problem is a phenomenon in which a single unit of currency is spent simultaneously more than once. This creates a disparity between the spending record and the amount of that currency available.
Imagine, for example, if someone walks into a clothing store with only $10 and buys a $10 shirt, then buys another $10 shirt with the same $10 already paid to the cashier. While this is difficult to do with physical money—in part because recent transactions and current owners can be easily verified in real-time—there’s more opportunity to do it with digital currency. Double spending is most commonly associated with Bitcoin because digital information can be manipulated or reproduced more easily by skilled programmers familiar with how the blockchain protocol works. Bitcoin is also a target for thieves to double-spend because Bitcoin is a peer-to-peer medium of exchange that doesn’t pass through any intermediaries or institutions. How Does Double-Spending Bitcoin Work? Fundamentally, a Bitcoin double spend consists of a bad actor sending a copy of one transaction to make the copy appear legitimate while retaining the original, or erasing the first transaction altogether. This is possible—and dangerous—for Bitcoin or any digital currency because digital information is more easily duplicated. There are a few different ways criminals attempt to double-spend Bitcoin. Simultaneously Sending the Same Bitcoin Amount Twice (or More) In this situation, an attacker will simultaneously send the same bitcoin to two (or more) different addresses. This type of attack attempts to exploit the Bitcoin network’s slow 10-minute block time, in which transactions are sent to the network and queued to be confirmed and verified by miners to be added to the blockchain. In sneaking an extra transaction onto the blockchain, thieves can give the illusion that the original bitcoin amount hasn’t been spent already, or manipulate the existing blockchain and laboriously re-mine blocks with fake transaction histories to support the desired future double spend. 
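The defense against the race attack described above is the "first-seen" rule: a node admits the first transaction that spends a given output and rejects any later conflicting spend. A toy sketch in Python (ToyNode, the coin ids, and the transaction names are all hypothetical; real nodes track unspent outputs by transaction hash and output index):

```python
# A toy "first-seen" mempool: the node tracks unspent outputs (UTXOs)
# and rejects any transaction spending an output it no longer holds.
class ToyNode:
    def __init__(self, utxos):
        self.utxos = set(utxos)   # ids of outputs not yet spent
        self.accepted = []        # transactions admitted to the mempool

    def submit(self, tx_id, spends):
        if not spends <= self.utxos:   # some input already spent: reject
            return False
        self.utxos -= spends           # mark the inputs as spent
        self.accepted.append(tx_id)
        return True

node = ToyNode(utxos={"coin-1"})
# An attacker broadcasts two transactions spending the same coin.
assert node.submit("pay-merchant-A", {"coin-1"}) is True
assert node.submit("pay-merchant-B", {"coin-1"}) is False  # conflict rejected
```

Different nodes may see the two conflicting transactions in different orders, which is why merchants wait for block confirmations rather than trusting an unconfirmed transaction.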
Reverse an already-sent transaction Another way to attempt a Bitcoin double-spend is by reversing a transaction after receiving the counterparty’s assets or services, thus keeping both the received goods and the sent bitcoin. The attacker sends multiple packets (units of data) to the network to reverse the transactions, to give the illusion they never happened. Blockchain Concerns with Double Spending Some methods employed by hackers to circumvent the Bitcoin verification process consist of out-computing the blockchain security mechanism or double-spending by sending a fake transaction log to a seller and a different log to the network. Perhaps the greatest risk for double-spending Bitcoin is a 51% attack, a network disruption where a user (or users) control more than 50% of the computing power that maintains the blockchain’s distributed ledger of transactions. If a bad actor gains majority control of the blockchain, they can modify the network’s ledger to transfer bitcoin to their digital wallet multiple times as if the original transactions had not yet previously occurred. Another concern is the potential double-spending problem on decentralized exchanges as crypto continues to migrate to decentralized exchanges (DEX) and platforms. With no central authority or intermediary, the growth and adoption of DEXs will depend on their security and proven ability to prevent double-spending. Despite a variety of attempts to successfully double spend Bitcoin, the majority of bitcoin thefts have not been the result of double-counting or double-spend attacks but rather users not properly securing their bitcoin. How Does Bitcoin Prevent Double Spending? Bitcoin’s network prevents double-spending by combining complementary security features of the blockchain network and its decentralized network of miners to verify transactions before they are added to the blockchain. 
Here’s an example of that security in action: Person A and Person B go to a store with only one collective BTC to spend. Person A buys a TV costing exactly 1 BTC. Person B buys a motorcycle that also costs exactly one BTC. Both transactions go into a pool of unconfirmed transactions, but only the first transaction gets confirmations (blocks containing transactions from preceding blocks and new transactions) and is verified by miners in the next block. The second transaction gets pulled from the network because it didn’t get enough confirmations after the miners determined it was invalid. Security measure 1: Whichever transaction gets the maximum number of network confirmations (typically a minimum of six) will be included in the blockchain, while others are discarded Security measure 2: Once confirmations and transactions are put on the blockchain they are time-stamped, rendering them irreversible and impossible to alter Once a merchant receives the minimum number of block confirmations, they can be sure a transaction was valid and not a double spend. Bitcoin’s proof-of-work consensus model is inherently resistant to double-spending because of its block time. Proof-of-work requires miners on the network, or validator nodes, to solve complex algorithms that require a significant amount of computing power, or “hash power.” This process makes any attempt to duplicate or falsify the blockchain significantly more difficult to execute, because the attacker would have to go back and re-mine every single block with the new fraudulent transaction(s) on it. This process compounds over time, preserving previous transactions while recording new transactions. Reaching consensus through proof-of-work mining provides the network accountability by verifying Bitcoin ownership in each transaction and preventing double-counting and other subtle forms of fraud. 
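The proof-of-work "puzzle" miners race to solve can be sketched in a few lines of Python: repeatedly hash the block header with a trial nonce until the result falls below a target. This is a simplified illustration only; real Bitcoin uses double SHA-256 over a binary header, and the header string and difficulty here are made up:

```python
import hashlib

def mine(header: str, difficulty_bits: int) -> int:
    # Brute-force a nonce until sha256(header + nonce) falls below the
    # target, i.e. the hash starts with `difficulty_bits` zero bits.
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{header}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(header: str, nonce: int, difficulty_bits: int) -> bool:
    # Verification is a single hash: cheap to check, expensive to produce.
    digest = hashlib.sha256(f"{header}{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = mine("block-with-new-transactions", difficulty_bits=16)
assert verify("block-with-new-transactions", nonce, 16)
```

The asymmetry is the point: finding a valid nonce takes many hash attempts on average, but any node can verify the winner with a single hash, which is what makes rewriting history expensive.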
While it is technically possible for a group of individuals to initiate a 51% attack on the Bitcoin network, combining mining power and disrupting the network for their benefit, it is unlikely and difficult: it would require collusion among a tremendous number of miners, or a single miner with over 50% of the network’s hash power. Successfully executing a 51% attack has only gotten more difficult over time, for a few reasons: mining difficulty adjusts upward as the network’s total hash power grows; mining hardware is prohibitively expensive at that scale; and a massive amount of electricity would be required to power such a massive mining operation. The Takeaway Double-spending of Bitcoin is a concern, since it’s a digital currency with no central authority to verify its spending records. This leads some to question the security and legitimacy of Bitcoin’s network, validators, and monetary supply. However, the network’s distributed ledger of transactions, the blockchain, autonomously records and verifies each transaction’s authenticity and prevents double counting. The blockchain alone can’t prevent double-spending; it is one line of defense, backed by a decentralized network of miners who solve computationally intensive puzzles to confirm that new transactions are not double spends before they’re permanently added to the ledger. Cryptocurrencies like Bitcoin can be volatile investments and prices change quickly due to news flow and other factors. Yet it’s that potential for large price swings that compels some people to seek out crypto as an investment. With SoFi Invest® cryptocurrency trading, people of all experience levels can invest in cryptocurrencies like Bitcoin within a traditional investing platform, maintaining crypto alongside an investor’s stocks, bonds, and other assets. Find out how to invest in crypto with SoFi Invest.
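The 51% attack risk discussed above can be made quantitative. Section 11 of the Bitcoin whitepaper computes the probability that an attacker with a minority of the hash power catches up from z blocks behind; a short sketch of that calculation (using the Poisson approximation from the paper):

```python
import math

# Probability that an attacker controlling fraction q of the total hash power
# catches up from z blocks behind, following the calculation in section 11 of
# the Bitcoin whitepaper.

def attacker_success(q, z):
    p = 1.0 - q
    if q >= p:
        return 1.0                     # a majority attacker always succeeds
    lam = z * (q / p)
    prob = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

# Waiting for six confirmations makes a 10%-hash-power attacker's odds
# vanishingly small, which is why merchants wait before releasing goods.
print(attacker_success(0.10, 6) < 0.001)  # True
```

The probability drops off exponentially with the number of confirmations, which is the quantitative basis for the "typically a minimum of six confirmations" rule of thumb.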
no
Cryptocurrency
Can Bitcoin and other cryptocurrencies be manipulated easily?
no_statement
"bitcoin" and other "cryptocurrencies" cannot be "easily" "manipulated".. it is not easy to "manipulate" "bitcoin" and other "cryptocurrencies".
https://www.gisreportsonline.com/r/war-on-cash/
Central bank digital currencies and the war on cash
Central bank digital currencies and the war on cash Governments and banks love digital money because it allows them to track consumer behavior. Their plans to introduce CBDCs amount to a war on cash and cryptocurrencies, over which they have much less control. In a nutshell: governments and banks see cryptocurrencies and cash as a threat because they cannot be tracked. Digital money has been with us for a long time. There is a wide range of financial operations – like when we pay with a credit or debit card – in which no transfer of physical money takes place. Instead, computers and the internet do the job. People and businesses like digital money since it is easy to carry and handle in large amounts and can be easily converted into physical money at a fixed rate: 10 euros on a Visa card is equal to a 10-euro banknote. In fact, today’s digital and physical money only differ in how they are carried and transferred. The flipside of the coin is that the use of digital money ultimately depends on the rules of the game imposed by those who create it: a commercial bank or a credit card company. Moreover, and in contrast with physical money, digital transactions are not anonymous, since the issuer knows exactly how and when you use them. Cryptocurrencies such as bitcoin offer a different kind of service. They are virtual currencies, since they are monetary units in their own right, with their own denominations, and they are convertible into paper money at a variable rate determined by supply and demand. They are supplied according to an algorithm and are decentralized. No central agency monitors what people are doing with their crypto money. Monitoring transactions Understandably, governments and bankers love digital money and do their best to encourage its use. Governments believe that cash plays a key role in criminal activities and makes tax evasion and tax avoidance easier. Promoting and tracking digital transactions amounts to a war on cash.
More generally, governmental authorities like to monitor the population, possibly to fine-tune economic policymaking. Digital money is a powerful tool in that respect. Commercial banks are eager to promote digital transactions for several reasons. Digital money reduces transaction costs (staff and facilities can be replaced by powerful, centralized computers), allows banks to charge fees that do not apply when people resort to cash, and provides data to better assess individuals’ situations and habits when they apply for consumer credit. Today’s central bankers claim they pursue financial stability: they regulate the world of banking, they control (manipulate) the money supply and they cooperate with the government authorities. They now ensure that governments do not default on their public debt. This is what Western central bankers have been doing for over a decade, in the footsteps of the Bank of Japan and of other monetary institutions. From their viewpoint, the war on cash is certainly good news. By contrast, whatever smacks of decentralization or weaker discretionary power is considered a threat. This explains why central bankers like the idea of a digital currency in general, are toying with the possibility of introducing their own digital currencies and are eager to kill decentralized cryptocurrencies. It should be clear, however, that a central bank digital currency (CBDC) does not differ much from today’s paper money and is certainly not a new cryptocurrency. Will central bankers win the day? And if yes, what kind of CBDC should we expect? The different scenarios depend on what central bankers hope to achieve and on whether they are technically able to achieve it. Nobody has clear answers in that respect, as witnessed by the rather vague statements circulated by top officials in the eurozone, the United Kingdom and the United States.
So far, only a handful of countries have made a firm commitment to issuing a CBDC or have implemented one. China is the most notable of these, and has had mixed results. Violation of privacy Each major central bank will probably issue its own digital currency eventually, which monetary technocrats consider an additional policy tool at their disposal even if today they do not seem to know how to use it. Much depends on the target central bankers are pursuing and on their strategy to introduce a given CBDC. The Chinese experience shows that people are happy with their current digital money and do not see why they should make use of a CBDC, which promises to provide about the same services, but ends up violating one’s privacy. Hence, spontaneous adoption is unlikely. One could argue that a central bank is less likely to fail than a commercial bank. However, this argument is relevant only if the central authorities renege on their guarantees of banks’ deposits, which is improbable. Individuals could decide to have an electronic account with the central bank as a form of investment but would probably hesitate to put a large share of their liquid assets there, especially if the return is zero or even negative. A CBDC would be all but useless if it merely widened the options available to the public. Could a CBDC plausibly exist along with other means of payment (physical cash, traditional digital money, cryptocurrencies)? Certainly, central banks would be no match for MasterCard, American Express and other providers. Running a digital instrument of daily payments requires entrepreneurial and managerial skills that central bankers do not have. Moreover, they would face distorted incentives and give in to regulatory temptations. For example, they could cover losses by resorting to printing money, or create demand for their CBDC by requiring that people carry out given sets of transactions (like payments to or from the public sector) in CBDC.
Such a CBDC would likely not be a substitute for other means of payment, but rather for low-risk forms of investment, like bonds guaranteed by a central bank or by a credible government authority (like German bunds or U.S. treasuries), and it could reduce the demand for government bonds, which would probably require further monetary intervention to avoid public-finance crises and soaring interest rates. Yet, it could charm eurozone policymakers if their CBDC became the mandatory means of payment within the realms of taxation and at least some areas of public expenditure. If so, Brussels (and Frankfurt) could perhaps brag that governments no longer have their own resources, but only those channeled through – and approved and monitored by – the central monetary authorities. A second scenario would see the CBDC replace all other means of payment, an option that would require the end of convertibility to prevent residents from moving their funds to other currencies. This option would appeal to those who believe that monitoring and possibly regulating individuals’ spending habits dominate all other concerns. The cost would be high: the domestic price structure would be isolated from the world, with dramatic consequences for efficiency and trade. Monetary manipulation Digital currencies are with us because they are useful, but they will never totally replace physical money, which people consider a guarantee against technical problems (temporary blackouts in the electronic systems) and a partial defense against fiscal and regulatory abuse. Authorities could take their war on physical cash to extremes and plainly outlaw this means of payment.
Doing so would not require a CBDC, nor is a CBDC the solution to those who embrace cryptocurrencies to protect their privacy and believe that the benefits of their nondiscretionary supply rules offset the cost associated with risk and volatility. The best way of making sure that cryptocurrencies do not help tax evaders and avoiders is to stop manipulating traditional currencies and abusing taxpayers. Regrettably, monetary manipulation is likely to continue for years to come. The heir to the now-disgraced modern monetary theory is not sound money, but price regulation, higher taxation and more public indebtedness. The war on cash and crypto is eyewash and the introduction of CBDCs has little to do with money. Rather, it is an attempt to test our love of liberty and our tolerance for soft forms of totalitarianism.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8434614/
Blockchain for Electronic Voting System—Review and Open ...
Abstract Online voting is a trend that is gaining momentum in modern society. It has great potential to decrease organizational costs and increase voter turnout. It eliminates the need to print ballot papers or open polling stations—voters can vote from wherever there is an Internet connection. Despite these benefits, online voting solutions are viewed with a great deal of caution because they introduce new threats. A single vulnerability can lead to large-scale manipulations of votes. Electronic voting systems must be legitimate, accurate, safe, and convenient when used for elections. Nonetheless, adoption may be limited by potential problems associated with electronic voting systems. Blockchain technology emerged as a way to overcome these issues: it offers decentralized nodes for electronic voting and is used to produce electronic voting systems mainly because of its end-to-end verification advantages. This technology is an attractive alternative to traditional electronic voting solutions, with distributed operation, non-repudiation, and security protection as its characteristics. The following article gives an overview of electronic voting systems based on blockchain technology. The main goal of this analysis was to examine the current status of blockchain-based voting research and online voting systems and any related difficulties to predict future developments. This study provides a conceptual description of the intended blockchain-based electronic voting application and an introduction to the fundamental structure and characteristics of the blockchain in connection to electronic voting.
As a consequence of this study, it was discovered that blockchain systems may help solve some of the issues that now plague election systems. On the other hand, the most often mentioned issues in blockchain applications are privacy protection and transaction speed. For a sustainable blockchain-based electronic voting system, the security of remote participation must be viable, and for scalability, transaction speed must be addressed. Due to these concerns, it was determined that the existing frameworks need to be improved before they can be utilized in voting systems. 1. Introduction Electoral integrity is essential not just for democratic nations but also for voters’ trust and accountability. Political voting methods are crucial in this respect. From a government standpoint, electronic voting technologies can boost voter participation and confidence and rekindle interest in the voting system. As an effective means of making democratic decisions, elections have long been a social concern. As the number of votes cast in real life increases, citizens are becoming more aware of the significance of the electoral system [1,2]. A voting system is the method through which voters choose who will represent them in political and corporate governance. Democracy is a system in which voters elect their representatives by voting [3,4]. The efficacy of such a procedure is determined mainly by the level of faith that people have in the election process. The creation of legislative institutions to represent the will of the people is a well-known tendency. Such political bodies range from student unions to constituencies. Over the years, the vote has become the primary means for citizens to express their will by selecting from the available choices [2]. The traditional or paper-based polling method has served to increase people’s confidence in selection by majority voting.
It has helped make the democratic process of electing constituencies and governments more credible. In 2018, 167 of approximately 200 nations were democracies, although many of these were flawed or hybrid regimes [5,6]. The secret ballot has been used to enhance trust in democratic systems since the beginning of voting, and it is essential to ensure that this confidence does not diminish. A recent study revealed that the traditional voting process is not wholly clean: it raises several questions about fairness and equality, and about whether the will of the people is adequately quantified and understood [7] in this form of government [2,8]. Engineers across the globe have created new voting techniques that offer some protection against corruption while still ensuring that the voting process is correct. Technology has introduced new electronic voting techniques and methods [9], which are essential but have also posed significant challenges to the democratic system. Electronic voting increases election reliability when compared to manual polling. In contrast to the conventional voting method, it has enhanced both the efficiency and the integrity of the process [10]. Because of its flexibility, simplicity of use, and low cost compared to general elections, electronic voting is widely utilized in various decisions [11]. Despite this, existing electronic voting methods run the risk of over-centralized authority and manipulated records, limiting fundamental fairness, privacy, secrecy, anonymity, and transparency in the voting process. In most current electronic voting systems, procedures are centralized: licensed, controlled, measured, and monitored by a central authority, which is in itself a problem for a transparent voting process. Moreover, such electronic voting protocols have a single controller that oversees the whole voting process [12].
This design allows erroneous outcomes caused by a dishonest central authority (the election commission), which is difficult to rectify using existing methods. A decentralized network may be used as a modern electronic voting technique to circumvent the central authority. Blockchain technology offers a decentralized node for online voting or electronic voting. Recently, distributed ledger technologies such as blockchain have been used to produce electronic voting systems, mainly because of their end-to-end verification advantages [13]. Blockchain is an appealing alternative to conventional electronic voting systems with features such as decentralization, non-repudiation, and security protection. It is used to hold both boardroom and public voting [8]. A blockchain, literally a chain of blocks, is a growing list of blocks linked by cryptographic hashes. Each block contains the hash of the previous block, a timestamp, and transaction data. The blockchain was created to be tamper-resistant. Voting is a new application area for blockchain technology; here, researchers are trying to leverage benefits such as transparency, secrecy, and non-repudiation that are essential for voting applications [14]. Efforts to use blockchain technology to secure and rectify elections in electronic voting applications have recently received much attention [15]. The remainder of the paper is organized as follows. Section 2 explains how blockchain technology works, and a complete background of this technology is discussed. How blockchain technology can transform the electronic voting system is covered in Section 3. In Section 4, the problems and their solutions of developing online voting systems are identified. The security requirements for the electronic voting system are discussed in Section 5, and the possibility of electronic voting on blockchain is detailed in Section 6.
Section 7 discusses the available blockchain-based electronic voting systems and analyzes them thoroughly. In Section 8, all information related to the latest literature review is discussed and analyzed deeply. Section 9 addresses the study, open issues, and future trends. Finally, Section 10 concludes this survey. 2. Background The first things that come to mind about the blockchain are cryptocurrencies and smart contracts because of the well-known initiatives in Bitcoin and Ethereum. Bitcoin was the first crypto-currency solution that used a blockchain data structure. Ethereum introduced smart contracts that leverage the power of blockchain immutability and distributed consensus while offering a crypto-currency solution comparable to Bitcoin. The concept of smart contracts was introduced much earlier by Nick Szabo in the 1990s and is described as “a set of promises, specified in digital form, including protocols within which the parties perform on these promises” [16]. In Ethereum, a smart contract is a piece of code deployed to the network so that everyone has access to it. The result of executing this code is verified by a consensus mechanism and by every member of the network as a whole [17]. Today, we call a blockchain a set of technologies combining the blockchain data structure itself, distributed consensus algorithm, public key cryptography, and smart contracts [18]. Below we describe these technologies in more detail. Blockchain creates a series of blocks replicated on a peer-to-peer network. Each block carries a cryptographic hash of the previous block and a timestamp, as shown in Figure 1. A block contains a header with the Merkle tree root and several transactions [19]. Cryptography, a secure communication method that combines computer science and mathematics, is used to hide data and information from others.
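The Merkle tree mentioned above lets a block header commit to all of its transactions with a single hash. A minimal root computation as a sketch (Bitcoin duplicates the last hash on odd levels and uses double SHA-256 over raw bytes; this version hashes hex strings for readability):

```python
import hashlib

# Toy Merkle root: pairwise-hash a list of transaction hashes up to a single
# root, so the block header can commit to every transaction with one value.

def sha256(text):
    return hashlib.sha256(text.encode()).hexdigest()

def merkle_root(tx_hashes):
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2:                    # odd count: duplicate the last
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

txs = [sha256(t) for t in ("tx_a", "tx_b", "tx_c")]
print(merkle_root(txs) != merkle_root([txs[0], txs[1], sha256("tx_x")]))  # True
```

Changing any single transaction changes the root, and hence the block header's hash, which is what makes the transaction set of a block tamper-evident.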
It allows the data to be transmitted securely across the insecure network, in encrypted and decrypted forms [20,21]. As was already mentioned, the blockchain itself is the name for the data structure. All the written data are divided into blocks, and each block contains a hash of all the data from the previous block as part of its data [22]. The aim of using such a data structure is to achieve provable immutability. If a piece of data is changed, the block’s hash containing this piece needs to be recalculated, and the hashes of all subsequent blocks also need to be recalculated [23]. It means only the hash of the latest block has to be used to guarantee that all the data remains unchanged. In blockchain solutions, data stored in blocks are formed from all the validated transactions during their creation, which means no one can insert, delete or alter transactions in an already validated block without it being noticed [24]. The initial zero-block, called the “genesis block,” usually contains some network settings, for example, the initial set of validators (those who issue blocks). Blockchain solutions are developed to be used in a distributed environment. It is assumed that nodes contain identical data and form a peer-to-peer network without a central authority. A consensus algorithm is used to reach an agreement on blockchain data that is fault-tolerant in the presence of malicious actors. Such consensus is called Byzantine fault tolerance, named after the Byzantine Generals’ Problem [25]. Blockchain solutions use different Byzantine fault tolerance (BFT) consensus algorithms: Those that are intended to be used in fully decentralized self-organizing networks, such as cryptocurrency platforms, use algorithms such as proof-of-work or proof-of-stake, where validators are chosen by an algorithm so that it is economically profitable for them to act honestly [26]. 
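The cascade described above, where changing one block forces every later hash to be recalculated, can be demonstrated directly. A toy chain with illustrative field names (timestamps fixed for determinism; not a real blockchain format):

```python
import hashlib
import json

# Toy hash chain: each block stores the hash of the previous block, so
# tampering with any block breaks every link after it.

def block_hash(block):
    # Deterministic hash of the block's contents (stable JSON, sorted keys).
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_chain(payloads):
    chain, prev = [], "0" * 64        # genesis predecessor hash
    for data in payloads:
        block = {"prev_hash": prev, "data": data, "timestamp": 0}
        prev = block_hash(block)
        chain.append(block)
    return chain

def is_valid(chain):
    """Each block must reference the hash of the block before it."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["vote:alice", "vote:bob", "vote:carol"])
print(is_valid(chain))             # True
chain[1]["data"] = "vote:mallory"  # tamper with a middle block
print(is_valid(chain))             # False: every later link is now broken
```

Only the hash of the latest block needs to be trusted to guarantee that all earlier data is unchanged, which is exactly the provable-immutability property discussed above.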
When the network does not need to be self-organized, validators can be chosen at the network setup stage [27]. The point is that all validators execute all incoming transactions and agree on achieving results so that more than two-thirds of honest validators need to decide on the outcome. Public key cryptography is used mainly for two purposes: Firstly, all validators own their keypairs used to sign consensus messages, and, secondly, all incoming transactions (requests to modify blockchain data) have to be signed to determine the requester. Anonymity in a blockchain context relates to the fact that anyone wanting to use cryptocurrencies just needs to generate a random keypair and use it to control a wallet linked to a public key [28]. The blockchain solution guarantees that only the keypair owner can manage the funds in the wallet, and this property is verifiable [29,30]. As for online voting, ballots need to be accepted anonymously but only from eligible voters, so a blockchain by itself definitely cannot solve the issue of voter privacy. Smart contracts breathed new life into blockchain solutions. They stimulated the application of blockchain technology in efforts to improve numerous spheres. A smart contract itself is nothing more than a piece of logic written in code. Still, it can act as an unconditionally trusted third party in conjunction with the immutability provided by a blockchain data structure and distributed consensus [31]. Once written, it cannot be altered, and all the network participants verify all steps. The great thing about smart contracts is that anybody who can set up a blockchain node can verify its outcome. As is the case with any other technology, blockchain technology has its drawbacks. Unlike other distributed solutions, a blockchain is hard to scale: An increasing number of nodes does not improve network performance because, by definition, every node needs to execute all transactions, and this process is not shared among the nodes [32]. 
Moreover, increasing the number of validators impacts performance because it implies a more intensive exchange of messages during consensus. For the same reason, blockchain solutions are vulnerable to various denial-of-service attacks. If a blockchain allows anyone to publish smart contracts in a network, then the operation of the entire network can be disabled by simply putting an infinite loop in a smart contract. A network can also be attacked by merely sending a considerable number of transactions: at some point, the system will refuse to receive anything else. In cryptocurrency solutions, all transactions have an execution cost: the more resources a transaction utilizes, the more expensive it is, and there is a cost threshold, with transactions exceeding the threshold being discarded. In private blockchain networks [33,34], this problem is solved depending on how the network is implemented: via the same mechanism of transaction cost, via access control, or via something more suited to the specific context. 2.1. Core Components of Blockchain Architecture The main architectural components of a blockchain are shown in Figure 2. 3. How Blockchain Can Transform the Electronic Voting System Blockchain technology can fix shortcomings of today's election methods: it makes the polling mechanism transparent and accessible, prevents illegal voting, strengthens data protection, and makes the outcome of the poll verifiable. The implementation of an electronic voting method on a blockchain is therefore very significant [35]. However, electronic voting carries serious risks: if an electronic voting system is compromised, all cast votes can potentially be manipulated and misused. Despite all its possible advantages, electronic voting has thus not yet been adopted on a national scale. Today, blockchain technology offers a viable way to overcome these risks. Figure 4 shows the main difference between the two systems.
In traditional voting systems, a central authority records each cast vote. If someone wants to modify or change a record, they can do so easily, and no one can verify that record. In a blockchain-based system, there is no central authority: the data are stored on multiple nodes, and it is practically impossible to hack all the nodes and change the data. Thus, votes cannot be destroyed, and they can be efficiently verified by tallying against the other nodes. If the technology is used correctly, the blockchain is a digital, decentralized, encrypted, transparent ledger that can withstand manipulation and fraud. Because of the distributed structure of the blockchain, a blockchain-based electronic voting system reduces the risks involved with electronic voting and allows for a tamper-proof voting record. A blockchain-based electronic voting system requires a wholly distributed voting infrastructure: electronic voting based on blockchain will only work where the online voting system is fully controlled by no single body, not even the government [36]. To sum up, elections can only be free and fair when there is a broad belief in the legitimacy of the power held by those in positions of authority. The literature in this field and related experiments may be seen as a good path toward making voting more efficient in terms of administration and participation, and the idea of using blockchain offers a new model for electronic voting. 4. Problems and Solutions of Developing Online Voting Systems An online voting system should satisfy the following properties:
- Eligibility: only legitimate voters can take part in voting;
- Unreusability: each voter can vote only once;
- Privacy: no one except the voter can obtain information about the voter's choice;
- Fairness: no one can obtain intermediate voting results;
- Soundness: invalid ballots are detected and not taken into account during tallying;
- Completeness: all valid ballots are tallied correctly.
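To see why these properties interact, consider a naive implementation of the eligibility and unreusability checks (hypothetical code, not from any cited system). It works, but it must record who cast each ballot and therefore violates privacy; the techniques discussed next exist precisely to avoid this conflict:

```python
ELIGIBLE = {"alice", "bob", "carol"}   # the participation list

ballots = {}   # voter id -> choice (this very mapping is the privacy problem)

def cast(voter, choice):
    if voter not in ELIGIBLE:          # eligibility check
        raise ValueError("not an eligible voter")
    if voter in ballots:               # unreusability check
        raise ValueError("voter has already voted")
    ballots[voter] = choice

cast("alice", "candidate A")
cast("bob", "candidate B")
try:
    cast("alice", "candidate B")       # a second attempt is rejected
except ValueError:
    pass
assert len(ballots) == 2
```
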
Below is a brief overview of the solutions for satisfying these properties in online voting systems. 4.1. Eligibility The solution to the issue of eligibility is rather apparent: to take part in online voting, voters need to identify themselves using a recognized identification system, and the identifiers of all legitimate voters need to be added to the list of participants. But there are threats: firstly, all modifications made to the participation list need to be checked so that no illegitimate voters can be added; secondly, the identification system must be both trusted and secure so that a voter's account cannot be stolen or used by an intruder. Building such an identification system is a complex task in itself [37]. However, because this sort of system is necessary in a wide range of other contexts, especially digital government services, researchers believe it is best to use an existing identification system; the question of creating one is beyond the scope of this work. 4.2. Unreusability At first glance, implementing unreusability may seem straightforward: when a voter casts their vote, all that needs to be done is to place a mark in the participation list and not allow them to vote a second time. But privacy needs to be taken into consideration, and providing both unreusability and voter anonymity is tricky. Moreover, it may be necessary to allow the voter to re-vote, making the task even more complex [38]. A brief overview of unreusability techniques is provided below in conjunction with the outline of implementing privacy. 4.3. Privacy Privacy in the context of online voting means that no one except the voter knows how a participant has voted. Achieving this property mainly relies on one (or more) of the following techniques: blind signatures, homomorphic encryption, and mix-networks [39]. A blind signature is a method of signing data whereby the signer does not know what they are signing.
It is achieved by using a blinding function such that the blinding and signing functions are commutative: Blind(Sign(message)) = Sign(Blind(message)). The requester blinds (applies the blinding function to) their message and sends it for signing. After obtaining a signature for the blinded message, they use their knowledge of the blinding parameters to derive a signature for the unblinded message. Blind signatures mathematically prevent anyone except the requester from linking a blinded message and its corresponding signature with the unblinded pair [40]. The voting scheme proposed by Fujioka, Okamoto, and Ohta in 1992 [41] uses a blind signature: an eligible voter blinds their ballot and sends it to the validator. The validator verifies that the voter is allowed to participate, signs the blinded ballot, and returns it to the voter. The voter then derives a signature for the unblinded ballot and sends it to the tallier, and the tallier verifies the validator's signature before accepting the ballot. Many online voting protocols have evolved from this scheme, improving usability (in the original method, the voter had to wait till the end of the election and send a ballot decryption key), allowing re-voting, or implementing coercion resistance. The main threat here is the power of the signer: there must be a verifiable log of all emitted signatures. This information logically corresponds to the receipt of a ballot by a voter, so it should be verified that only eligible voters receive signatures from the signer [42]. It should also be verifiable that the accounts of voters who are permitted to vote but have not taken part are not utilized by an intruder. To truly break the link between voter and ballot, the ballot and the signature need to be sent through an anonymous channel [43].
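The blind-signature flow above can be sketched with textbook RSA, where blinding multiplies the message by r^e so that the signer never sees the ballot. The parameters below are toy values chosen purely for illustration; this is not a secure implementation:

```python
import random
from math import gcd

# Toy RSA key (small primes, illustration only).
p, q = 61, 53
n = p * q                               # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))       # private exponent

def blind(m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (m * pow(r, e, n)) % n, r    # blinded ballot plus blinding factor

def sign(blinded):
    # The validator signs without ever seeing m (after checking eligibility).
    return pow(blinded, d, n)

def unblind(s_blinded, r):
    return (s_blinded * pow(r, -1, n)) % n

m = 42                                  # the ballot, encoded as a number
blinded, r = blind(m)
sig = unblind(sign(blinded), r)
assert sig == pow(m, d, n)              # a valid ordinary signature on m
assert pow(sig, e, n) == m              # anyone can verify it publicly
```

Because sign() receives only m·r^e mod n, the validator cannot link the blinded request to the signed ballot later submitted to the tallier, which is exactly the unlinkability the Fujioka–Okamoto–Ohta scheme relies on.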
Homomorphic encryption is a form of encryption that allows mathematical operations to be performed on encrypted data without decryption; for example, an additively homomorphic scheme lets the sum of plaintexts be computed from the ciphertexts alone. It is worth mentioning that multiplicative homomorphic encryption can generally be used additively: given choices x and y, we can select a value g and encrypt exponentiations, since Enc(g^x) × Enc(g^y) = Enc(g^(x+y)). Homomorphic encryption can be used to obtain various properties necessary in an online voting system; with regard to privacy, it is used so that only the sum of all the choices is decrypted, and never any single voter's choice by itself. Using homomorphic encryption for privacy implies that decryption is performed by several authorities so that no single party holds the decryption key; otherwise, privacy would be violated [44]. It is usually implemented with a threshold decryption scheme. For instance, suppose we have n authorities and decrypting a result requires t of them, t <= n. The protocol assumes that each authority applies its part of the key to the sum of the encrypted choices. After t authorities perform this operation, we obtain the decrypted total sum of the choices. In contrast to the blind signature scheme, no anonymous channel between voters and the system is needed. Still, privacy relies on trust in the authorities: if they collude maliciously, all voters can be deanonymized. Mix-networks also rely on distributing trust, but in another way. The idea behind a mix-network is that voters' choices pass through several mix-servers that shuffle them and perform an action, either decryption or re-encryption, depending on the mix-network type. In a decryption mix-network, each mixing server has its own key, and the voter encrypts their choice like an onion so that each server unwraps its layer of encryption. In re-encryption mix-networks, each mix-server re-encrypts the voters' choices.
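The "tally under encryption" idea can be demonstrated with a toy Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The primes below are toy values for illustration; a real deployment would use large keys, and the private values lam and mu would be shared among authorities via threshold decryption rather than held by one party:

```python
import random
from math import gcd

# Toy Paillier parameters (illustration only).
p, q = 17, 19
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)                            # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

# Each voter encrypts a 0/1 choice; the tallier multiplies the ciphertexts
# and decrypts only the sum, never an individual ballot.
ballots = [encrypt(v) for v in (1, 0, 1, 1)]
tally = 1
for c in ballots:
    tally = (tally * c) % n2
assert decrypt(tally) == 3
```

Note that nothing in the tallying loop requires the private key; only the final aggregate is ever decrypted, which is what preserves individual privacy.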
There are many mix-network proposals, and reviewing all their properties is beyond the scope of this paper. The main point regarding privacy here is that, in theory, if at least one mix-server performs an honest shuffle, privacy is preserved. This differs slightly from privacy based on homomorphic encryption, where we make assumptions about the number of malicious authorities. In addition, the idea behind mix-networks can be used to build the anonymous channels required by other techniques [45]. 4.4. Fairness Fairness, in the sense that no one obtains intermediate results, is achieved straightforwardly: voters encrypt their choices before sending them, and those choices are decrypted only at the end of the voting process. The critical thing to remember here is that anyone who owns a decryption key and has access to the encrypted choices can obtain intermediate results. This problem is solved by distributing the key among several key holders [41]. A system where all the key holders are required for decryption is unreliable: if one of the key holders does not participate, decryption cannot be performed. Therefore, threshold schemes are used, whereby a specific number of key holders are required to perform decryption. There are two main approaches for distributing the key: secret sharing, where a trusted dealer divides the generated key into parts and distributes them among key holders (e.g., Shamir's Secret Sharing protocol); and distributed key generation, where no trusted dealer is needed and all parties contribute to the calculation of the key (e.g., Pedersen's Distributed Key Generation protocol). 4.5. Soundness and Completeness On the face of it, the completeness and soundness properties seem relatively straightforward, but realizing them can be problematic depending on the protocol. If ballots are decrypted one by one, it is easy to distinguish between valid and invalid ones, but things become more complicated when it comes to homomorphic encryption.
As a single ballot is never decrypted, the decryption result will not show whether more than one option was chosen, or whether a ballot was formed so that it counted as ten choices (or a million) at once. Thus, we need to prove that the encrypted data meet the properties of a valid ballot without disclosing any information that could help determine how the vote was cast. This task is solved by zero-knowledge proofs [46]: by definition, a zero-knowledge proof is a cryptographic method of proving a statement about a value without disclosing the value itself. More specifically, range proofs are used in such cases to demonstrate that a specific value belongs to a particular set. The properties described above are the bare minimum for any voting solution. But all the technologies mentioned above are useless if there is no trust in the system itself. A voting system needs to be fully verifiable to earn this trust, i.e., everyone involved must be able to ensure that the system complies with the stated properties. Ensuring verifiability can be split into two tasks: personal, when the voter can verify that their ballot is correctly recorded and tallied; and universal, when everyone can verify that the system as a whole works precisely [47]. This entails publishing the inputs and outputs of the voting protocol stages along with proofs of correct execution. For example, mix-networks rely on proofs of correct shuffling (a type of zero-knowledge proof), while proofs of correct decryption are used both in mix-networks and in threshold decryption. The more processes that are open to public scrutiny, the more verifiable the system is. However, online voting makes extensive use of cryptography, and the more complex the cryptography, the more obscure it is for most system users [48]. It may take a considerable amount of time to study a protocol, and even more to identify any vulnerabilities or backdoors; and even if the entire system is carefully researched, there is no guarantee that the audited code is the code actually running during a real election.
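The threshold key distribution mentioned for fairness in Section 4.4 is commonly built on Shamir's Secret Sharing: the secret (e.g., a decryption key) is the constant term of a random polynomial of degree t-1, and any t shares reconstruct it by Lagrange interpolation at zero, while t-1 shares reveal nothing. A minimal sketch over a small prime field (illustrative only):

```python
import random

P = 2**61 - 1  # a Mersenne prime defining the field

def split(secret, t, n):
    # Random polynomial with f(0) = secret; share i is the point (i, f(i)).
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation evaluated at x = 0.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = split(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 shares suffice
assert reconstruct(shares[2:]) == 123456789
```

In the voting setting, each authority would hold one share, and decryption of the final tally proceeds only once a threshold of authorities cooperates; Pedersen's protocol removes even the dealer by having the parties generate the shares jointly.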
Last but not least are the problems associated with coercion and vote-buying. Online voting brings these problems to the next level: as ballots are cast remotely from an uncontrolled environment, coercers and vote buyers can operate on a large scale [49]. That is why one of the desired properties of an online voting system is coercion resistance. It is called resistance because nothing can stop a coercer from standing behind the voter and controlling their actions; the point is to do as much as possible to lower the risk of mass interference. Both kinds of malefactors, coercers and vote buyers, demand proof of how a voter voted. That is why many coercion-resistant voting schemes introduce the concept of receipt-freeness: the voter cannot create a receipt that proves how they voted. Approaches to implementing receipt-freeness generally rely on a trusted party, either a system or a device, that hides from the voter the unique parameters used to form a ballot, so the voter cannot prove that a particular ballot belongs to them [50]. The downside of this approach is that if a voter claims that their vote was recorded or tallied incorrectly, they simply cannot prove it due to the lack of evidence. This overview of the technologies used to meet the necessary properties of online voting systems has deliberately considered the properties separately [51]. When it comes to assembling a whole protocol, most solutions introduce a trade-off: for example, as noted for the blind signature, there is a risk that non-eligible voters will vote; receipt-freeness contradicts verifiability; a more complex protocol can dramatically reduce usability; and so on. Furthermore, these are only the fundamental principles of developing a solution; in a real-world system, many additional aspects must be considered, such as the security and reliability of the communication protocols, the system deployment procedure, and access to system components [52].
At present, no protocol satisfies all the desired properties; therefore, no 100% robust online voting system exists. 5.1. Anonymity Throughout the polling process, voters' choices must be protected from external interpretation. Any correlation between registered votes and voter identities inside the electoral structure must remain unknown [20,53]. 5.2. Auditability and Accuracy Accuracy, also called correctness, demands that the declared results correspond precisely to the election results. It means that nobody can change the votes of other citizens, that the final tally includes all legitimate votes [54], and that invalid ballots are not included in the final tally. 5.3. Democracy/Singularity A system is considered "democratic" if only eligible voters can vote and only a single vote can be cast by each registered voter [55]. It also requires that no one else be able to duplicate a vote. 5.4. Vote Privacy After a vote is cast, no one should be in a position to link the identity of a voter with their vote. Computational secrecy is a weaker form of confidentiality: the link between voter and vote remains hidden only for as long as it withstands advances in computing power and new techniques [56,57]. 5.5. Robustness and Integrity This condition means that a reasonably large group of electors or representatives cannot disrupt the election. It ensures that registered voters can abstain without problems and that no one can cast other voters' legitimate votes in their place. Corrupt citizens and officials are prevented from discrediting an election result by arguing that some other member did not perform their part correctly [58]. 5.6. Lack of Evidence While anonymity provides safeguards against electoral fraud, no method can ensure that votes are not cast under bribery or other forms of election rigging. This problem has existed from the start [59]. 5.7.
Transparency and Fairness This means that no one can find out partial results before the count is released. It prevents acts such as manipulating late voters' decisions by issuing a prediction, or giving a significant yet unfair advantage to certain persons or groups by letting them be the first to know [60]. 5.8. Availability and Mobility Voting systems should always be available during the voting period, and they should not limit the place from which a vote can be cast. 5.9. Verifiable Participation/Authenticity This criterion, also referred to as desirability [61], makes it possible to assess whether or not an individual voter participated in the election [62]. It must be fulfilled where voting is compulsory under the constitution (as in some countries, such as Australia, Germany, and Greece) or in social contexts where abstention is deemed a disrespectful gesture (such as small and medium-sized elections for a delegated corporate board). 5.10. Accessibility and Reassurance Everyone who wants to vote must have the opportunity to reach the correct polling station, and that polling station must be open and accessible to the voter. Only qualified voters should be allowed to vote, and all ballots must be accurately tallied to guarantee that elections are genuine [63]. 5.11. Recoverability and Identification 5.12. Voter Verifiability Verifiability means that processes exist for auditing the election to ensure that it is conducted correctly. Three separate segments are possible for this purpose: (a) universal or public verifiability [64], which implies that anybody, including voters, governments, and external auditors, can check the election after the declaration of the tally; (b) individual verifiability against a poll [65], a weaker requirement whereby each voter can verify whether their vote has been taken into account properly. 6. Electronic Voting on Blockchain This section provides some background information on electronic voting methods.
Electronic voting is a voting technique in which votes are recorded or counted using electronic equipment; it is usually defined as voting supported by electronic hardware and software. Such systems should be capable of supporting or implementing various functions, ranging from election setup through vote storage. Kiosks at election offices, laptops, and, more recently, mobile devices are all examples of system types. Voter registration, authentication, voting, and tallying must all be incorporated into an electronic voting system (Figure 6). One of the areas where blockchain may have a significant impact is electronic voting. The level of risk is so great that conventional electronic voting alone is not a viable option: if an electronic voting system is hacked, the consequences will be far-reaching. Because a blockchain network is, by design, decentralized, open, and consensus-driven, the design of a blockchain-based network guarantees that fraud is theoretically impossible if the system is adequately implemented [66]. As a result, the blockchain's unique characteristics must be taken into account. There is nothing inherent in blockchain technology that restricts its use to cryptocurrency. The idea of utilizing blockchain technology to create a tamper-resistant electronic/online voting network is gaining momentum [67]. End users would not notice a significant difference between a blockchain-based voting system and a traditional electronic voting system. On the other hand, a vote on the blockchain is an encrypted piece of data that is fully open and publicly stored on a distributed blockchain network rather than on a single server. A consensus process on the blockchain validates each encrypted vote, and the public records each vote on distributed copies of the blockchain ledger [68]. The government can observe how votes were cast and recorded, but this information is not restricted to the government alone.
The blockchain voting system is decentralized and completely open, yet it ensures that voters are protected: anybody may count the votes in blockchain electronic voting, but no one knows who voted for whom. Standard electronic voting and blockchain-based electronic voting rest on categorically distinct organizational ideas. 7. Current Blockchain-Based Electronic Voting Systems The following businesses and organizations, mostly founded within the last five years, are developing the voting sector. All share a strong vision for the blockchain network to put transparency into practice. Table 1 shows the different online platforms, their consensus mechanisms, and the technology used to develop each system. Currently available blockchain-based voting systems have scalability issues. These systems can be used on a small scale, but they are not efficient enough at the national level to handle millions of transactions because they use current blockchain frameworks such as Bitcoin, Ethereum, Hyperledger Fabric, etc. In Table 2, we present a scalability analysis of well-known blockchain platforms. The scalability issue arises from the blockchain value proposition itself; it therefore cannot be resolved simply by altering blockchain settings. To scale a blockchain, it is insufficient to increase the block size or to lower the block time by lowering the hash complexity. With each approach, the scaling capability hits a limit before it can achieve the transaction rates needed to compete with companies such as Visa, which manages an average of 150 million transactions per day. Research released by Tata Communications in 2018 showed that 44% of the companies in their survey used blockchain, and it refers to general issues arising from the use of new technology. The unresolved scalability issue emerges as a barrier, from an architectural standpoint, to blockchain adoption and practical implementations. As Deloitte Insights puts it, “blockchain-based systems are comparatively slow.
Blockchain’s sluggish transaction speed is a major concern for enterprises that depend on high-performance legacy transaction processing systems.” In 2017 and 2018, the public gained a sense of the scalability issues: significant delays and excessive fees on the Bitcoin network, and the infamous CryptoKitties application that clogged the Ethereum network (a network that thousands of decentralized applications rely on). 7.1. Follow My Vote It is a company with a secure online voting platform centered on the blockchain, with ballot-box auditability for observing democratic development in real time [69]. The platform enables voters to cast their votes remotely and safely and to vote for their preferred candidate. Voters can then use their identification to open the ballot box, locate their ballot, and check both that it is correct and that the election results have been proven mathematically accurate. 7.2. Voatz This company established a smartphone-based blockchain voting system that allows voters to vote remotely and anonymously and to verify that their vote was counted correctly [70]. Voters confirm their candidates and themselves in the application, providing proof of identity via a photo and their identification documents, together with biometric confirmation such as a fingerprint or retinal scan. 7.3. Polyas It was founded in Finland in 1996. The company employs blockchain technology to provide the public and private sectors with an electronic voting system [71]. In 2016, Polyas was accredited by the German Federal Office for Information Security as sufficiently secure for electronic voting applications. Many significant companies throughout Germany use Polyas for electronic voting, and Polyas now has customers throughout the United States and Europe. 7.4. Luxoft The first customized blockchain electronic voting system used by a significant industry was developed by the global I.T.
service provider Luxoft Harding, Inc., in partnership with the City of Zug and the Lucerne University of Applied Sciences in Switzerland [72]. To drive government adoption of blockchain-based services, Luxoft announced its commitment to open-source this platform and established a Government Alliance Blockchain to promote blockchain use in public institutions. 7.5. Polys Polys is a blockchain-based online voting platform backed by transparent cryptographic algorithms and powered by Kaspersky Lab. Polys supports the organization of polls by student councils, unions, and associations and helps them spread electoral information to students [73]. Online elections with Polys improve productivity in a community, improve contact with group leaders, and attract new supporters [74]. Polys aims to save time and money for local authorities, state governments, and other organizations by helping them focus on collecting and preparing proposals. 7.6. Agora It is a group that has introduced a blockchain digital voting platform. It was established in 2015 and partially implemented in the presidential election in Sierra Leone in March 2018. Agora's architecture is built on several technological innovations: a custom blockchain, unique participatory security, and a legitimate consensus mechanism [75]. The vote is the native and universal token of Agora's ecosystem; it encourages citizens and elected bodies, serving as witnesses of elections worldwide, to commit to a secure and transparent electoral process. 8. Related Literature Review Several articles have been published in recent years highlighting the security and privacy issues of blockchain-based electronic voting systems; this section compares selected blockchain-based electronic voting schemes. The open vote network (OVN) was presented by [76]; it is the first deployment of a transparent and self-tallying internet voting protocol with total user privacy, implemented on Ethereum.
In OVN, the voting size was limited by the framework to 50–60 electors. OVN is unable to stop fraudulent miners from corrupting the system, and a fraudulent voter may circumvent the voting process by sending an invalid vote. The protocol does nothing to guarantee coercion resistance, and the election administrator must be trusted [77,78]. Furthermore, since Solidity does not support elliptic curve cryptography, the authors used an external library to perform the computation [79]; after the library was added, the voting contract became too big to be stored on the blockchain. OVN is also susceptible to denial-of-service attacks, which have occurred throughout the history of the Bitcoin network [80]. Table 3 shows the main comparison of selected electronic voting schemes based on blockchain. Lai et al. [81] suggested a decentralized anonymous transparent electronic voting system (DATE) requiring a minimal degree of trust between participants. They consider the current DATE voting method appropriate for large-scale electronic elections. Unfortunately, their proposed system is not strong enough to protect against DoS attacks because no third-party authority in the scheme is responsible for auditing the votes after the election. Because of the limitations of the platform, the system is suitable only for small scales [8]. Although using a ring signature preserves the privacy of individual voters, it is hard to manage and coordinate the several signer entities. They also use PoW consensus, which has significant drawbacks such as energy consumption: miners' "supercomputers" perform millions of computations per second worldwide. Because this arrangement requires high computational power, it is expensive and energy-consuming. Shahzad et al. [2] proposed BSJC proof of completeness as a reliable electronic voting method. They used a process model to describe the whole system's structure.
On a smaller scale, it also attempted to address anonymity, privacy, and security problems in elections. However, many additional problems have been highlighted. Proof of work, for example, is a mathematically vast and challenging job that requires a tremendous amount of energy to complete. Another problem is the participation of a third party, since there is a significant risk of data tampering, leakage, and unfairly tabulated results, all of which may impact end-to-end verification. On a large scale, generating and sealing the blocks may delay the polling process [8]. Gao et al. [8] suggested a blockchain-based anti-quantum electronic voting protocol with an audit function. They also modified the code-based Niederreiter algorithm to make it more resistant to quantum attacks. The Key Generation Center (KGC), a certificateless cryptosystem, serves as a regulator: it not only preserves the voter's anonymity but also facilitates the audit function. However, an examination of their system reveals that the security and efficiency benefits are substantial only for a small-scale election with a modest number of voters; if the number is high, some efficiency is sacrificed to provide better security [82]. Yi [83] presented a blockchain-based electronic voting scheme (BES) that offered methods for improving electronic voting security in a peer-to-peer network using blockchain technology. BES, based on distributed ledger technology (DLT), may be employed to prevent vote falsification. The system was designed and tested on Linux systems in a P2P network. In this technique, counter-measurement attacks constitute a significant issue. The method necessitates the involvement of responsible third parties and is not well suited to centralized usage in a system with many agents. A distributed process, i.e., the use of secure multiparty computation, may address the problem.
However, in this situation, computing expenses are greater and may be prohibitive if the computed function is complex and there are too many participants [84,85]. Khan, K.M. [86] proposed a block-based e-voting architecture (BEA) and conducted strict experimentation with permissioned and permissionless blockchain architectures across different scenarios involving voting population, block size, block generation rate, and block transaction speed. Their experiments uncovered fascinating findings about how these parameters influence the overall scalability and reliability of the electronic voting model, including trade-offs between different parameters and between protection and performance measures within the organization alone. In their scheme, the electoral process requires the generation of voter addresses and candidate addresses. These addresses are then used to cast votes from voters to candidates. The mining group updates the ledger of the main blockchain to keep track of the votes cast and their status; the voting status remains unconfirmed until a miner updates the main ledger. The vote is then cast using the voting machine at the polling station. However, this model has some flaws: there is no regulatory authority to restrict invalid voters from casting a vote, and it is not secure against quantum attacks. Their model is not accurate and does not take care of voter integrity. Moreover, their scheme uses distributed consensus in which, because fewer people keep the network active, participants can be organized into cartels and a "51%" attack becomes easier to mount; such an attack is potentially more concentrated. The authors also did not discuss scalability and delays in electronic voting, which are the main concerns about blockchain voting systems. They used the Multichain framework, a private blockchain derived from Bitcoin, which is unsuitable for a nationwide voting process.
As the authors mentioned, their system is efficient for small and medium-sized voting environments only.

9. Discussion and Future Work

Many issues with electronic voting can be solved using blockchain technology, which makes electronic voting more cost-effective, convenient, and safe than on any other network. Over time, research has highlighted specific problems: blockchain-based electronic voting needs further work, and existing schemes face significant technical challenges.

9.1. Scalability and Processing Overheads

Blockchain works well for a small number of users. However, when the network is used for large-scale elections, the number of users increases, resulting in higher cost and time to process each transaction. Scalability problems are exacerbated by the growing number of nodes in the blockchain network. In an election setting, the system's scalability is already a significant issue [87], and integrating electronic voting will strain a blockchain-based system further [88,89]. Table 3 elucidates different metrics or properties inherent to all blockchain frameworks and presents a comparative analysis of blockchain-based platforms such as Bitcoin, Ethereum, Hyperledger Fabric, Litecoin, Ripple, Dogecoin, and Peercoin. One way to enhance blockchain scaling is to parallelize the chain, which is called sharding. In a conventional blockchain network, transactions and blocks are verified by all the participating nodes. To enable high concurrency, the data should be horizontally partitioned into parts, each known as a shard.

9.2. User Identity

Blockchain uses pseudonyms as usernames. This strategy does not provide complete privacy and secrecy. Because the transactions are public, a user's identity may be discovered by examining and analyzing them. In this respect, the blockchain's functionality is not well suited to national elections [90].

9.3.
Transactional Privacy

In blockchain technology, transactional anonymity and privacy are difficult to accomplish [91]. However, transactional secrecy and anonymity are required in an election system because of the transactions involved. For this purpose, a third-party authority is required, but it should not be centralized; it should serve as a check and balance on privacy.

9.4. Energy Efficiency

Blockchain incorporates energy-intensive processes such as consensus protocols, peer-to-peer communication, and asymmetric encryption. Appropriate energy-efficient consensus methods are a necessity for blockchain-based electronic voting. Researchers have suggested modifications to current peer-to-peer protocols to make them more energy-efficient [92,93].

9.5. Immatureness

Blockchain is a revolutionary technology that symbolizes a complete shift to a decentralized network. It has the potential to revolutionize businesses in terms of strategy, structure, processes, and culture. The current implementation of blockchain is not without flaws. The technology is still immature, and there is little public or professional understanding of it, making it difficult to evaluate its future potential. Most present technical issues in blockchain adoption are caused by the technology's immaturity [94].

9.6. Acceptableness

While blockchain excels at delivering accuracy and security, people's confidence and trust are critical components of effective blockchain electronic voting [95]. The intricacy of blockchain may make blockchain-based electronic voting difficult for people to accept, and this can be a significant barrier to its adoption by the general public [96]. A broad awareness campaign is needed to inform people about the benefits of blockchain voting systems, so that the new technology becomes easier to accept.

9.7.
Political Leaders' Resistance

Blockchain-based electronic voting will shift power away from central authorities, such as election authorities and government agencies. As a result, political leaders who have profited from the existing election process are likely to oppose the technology, because blockchain will empower social resistance through decentralized autonomous organizations [97].

10. Conclusions

The goal of this research is to analyze and evaluate current research on blockchain-based electronic voting systems. The article discusses recent electronic voting research using blockchain technology. The blockchain concept and its uses are presented first, followed by existing electronic voting systems. Then, a set of deficiencies in existing electronic voting systems is identified and addressed. The article then examines the blockchain's potential to enhance electronic voting, current solutions for blockchain-based electronic voting, and possible research directions for blockchain-based electronic voting systems. Numerous experts believe that blockchain may be a good fit for a decentralized electronic voting system. Furthermore, all voters and impartial observers may see the voting records kept in these proposed systems. On the other hand, researchers discovered that most publications on blockchain-based electronic voting identified and addressed similar issues. There are many research gaps in electronic voting that need to be addressed in future studies. Scalability attacks, lack of transparency, reliance on untrustworthy systems, and the lack of coercion resistance are all potential drawbacks that must be addressed. As further research is required, we are not entirely aware of all the risks connected with the security and scalability of blockchain-based electronic voting systems. Adopting blockchain voting methods may expose users to unforeseen security risks and flaws. Blockchain technologies require a more sophisticated software architecture as well as managerial expertise.
The above-mentioned crucial concerns should be addressed in more depth during actual voting procedures, based on experience. As a result, electronic voting systems should initially be implemented in limited pilot areas before being expanded. Many security flaws still exist in the internet and in polling machines. Electronic voting over a secure and dependable internet will need substantial security improvements. Despite its appearance as an ideal solution, the blockchain system cannot wholly address the voting system's issues because of these flaws. This research revealed that blockchain systems raise difficulties that need to be addressed and that many technical challenges remain. That is why it is crucial to understand that blockchain-based technology is still in its infancy as an electronic voting option.

Acknowledgments

This research was funded by the Malaysia Ministry of Education (FRGS/1/2019/ICT01/UKM/01/2) and Universiti Kebangsaan Malaysia (PP-FTSM-2021).

Author Contributions

Conceptualization, U.J., M.J.A.A. and Z.S.; methodology, U.J., M.J.A.A. and Z.S.; formal analysis, U.J., M.J.A.A. and Z.S.; writing—original draft preparation, U.J. and M.J.A.A.; writing—review and editing, U.J.; supervision, M.J.A.A. and Z.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Footnotes

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
For the same reason, blockchain solutions are vulnerable to various denial-of-service attacks. If a blockchain allows anyone to publish smart contracts on the network, then the operation of the entire network can be disabled by simply putting an infinite loop in a smart contract. A network can also be attacked by merely sending a considerable number of transactions: at some point, the system will refuse to receive anything else. In cryptocurrency solutions, all transactions have an execution cost: the more resources a transaction utilizes, the more expensive it will be, and there is a cost threshold, with transactions exceeding the threshold being discarded. In private blockchain networks [33,34], this problem is solved, depending on how the network is implemented, via the same mechanism of transaction cost, via access control, or via something more suited to the specific context.

2.1. Core Components of Blockchain Architecture

These are the main architectural components of blockchain, as shown in Figure 2.

3. How Blockchain Can Transform the Electronic Voting System

Blockchain technology can fix shortcomings in today's election methods: it makes the polling mechanism transparent and accessible, prevents illegal voting, strengthens data protection, and makes the outcome of the poll verifiable. The implementation of electronic voting on blockchain is therefore very significant [35]. However, electronic voting carries significant risks: if an electronic voting system is compromised, all cast votes can be manipulated and misused. Electronic voting has thus not yet been adopted on a national scale, despite all its possible advantages. Today, blockchain technology offers a viable solution for overcoming these risks. Figure 4 shows the main difference between the two systems. In traditional voting systems, we have a central authority to cast a vote.
If someone wants to modify or change the record, they can do so easily, and no one can verify that record. In a blockchain-based system, by contrast, there is no central authority; the data are stored on multiple nodes. It is not possible to hack all the nodes and change the data. Thus, one cannot destroy the votes, and the votes can be efficiently verified by tallying with other nodes. If the technology is used correctly, the blockchain is a digital, decentralized, encrypted, transparent ledger that can withstand manipulation and fraud.
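The tamper-evidence property described above rests on each block committing to the hash of its predecessor, so changing any recorded vote invalidates every later link. A minimal, illustrative sketch (a toy hash chain, not any specific production design):

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's canonical JSON encoding."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_chain(votes):
    """Build a chain where each block commits to the previous block's hash."""
    chain, prev = [], "0" * 64
    for v in votes:
        block = {"vote": v, "prev": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def verify(chain):
    """Recompute every link; any mismatch means the ledger was altered."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["alice->A", "bob->B", "carol->A"])
assert verify(chain)
chain[1]["vote"] = "bob->A"   # tamper with one recorded vote
assert not verify(chain)      # every honest node detects the change
```

Because each node holds its own copy and can run `verify` independently, an attacker would have to rewrite the chain on every node at once, which is the tally-across-nodes argument made above.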
no
Ethnobotany
Can Echinacea prevent colds?
yes_statement
"echinacea" can "prevent" "colds".. cold "prevention" is possible with "echinacea".
https://www.nccih.nih.gov/health/tips/tips-natural-products-for-the-flu-and-colds-what-does-the-science-say
5 Tips: Natural Products for the Flu and Colds: What Does the ...
5 Tips: Natural Products for the Flu and Colds: What Does the Science Say? 5 Tips: Natural Products for the Flu and Colds: What Does the Science Say? It’s that time of year again—cold and flu season. Each year, approximately 5 to 20 percent of Americans come down with the flu. Although most recover without incident, flu-related complications typically lead to at least 200,000 hospitalizations and between 12,000 and 60,000 deaths each year. Colds generally do not cause serious complications, but they are among the leading reasons for visiting a doctor and for missing school or work. Some people try natural products such as herbs or vitamins and minerals to prevent or treat these illnesses. But do they really work? What does the science say? Vaccination is the best protection against getting the flu. Starting in 2010, the Federal Government’s Centers for Disease Control and Prevention has recommended annual flu vaccination for all people aged 6 months and older.  There is currently no strong scientific evidence that any natural product is useful against the flu. Zinc taken orally (by mouth) may help to treat colds, but it can cause side effects and interact with medicines. Zinc is available in two forms—oral zinc (e.g., lozenges, tablets, syrup) and intranasal zinc (e.g., swabs and gels). A 2015 analysis of clinical trials found that oral zinc helps to reduce the length of colds when taken within 24 hours after symptoms start. Intranasal zinc has been linked to a severe side effect (irreversible loss of the sense of smell) and should not be used.  A note about safety: Oral zinc can cause nausea and other gastrointestinal symptoms. Long-term use of zinc, especially in high doses, can cause problems such as copper deficiency. Zinc may interact with drugs, including antibiotics and penicillamine (a drug used to treat rheumatoid arthritis). Vitamin C does not prevent colds and only slightly reduces their length and severity. 
A 2013 review of scientific literature found that taking vitamin C regularly did not reduce the likelihood of getting a cold but was linked to small improvements in cold symptoms. In studies in which people took vitamin C only after they got a cold, vitamin C did not improve their symptoms. A note about safety: Vitamin C is generally considered safe; however, high doses can cause digestive disturbances such as diarrhea and nausea. Echinacea has not been proven to help prevent or treat colds. Echinacea is an herbal supplement that some people use to treat or prevent colds. Echinacea products vary widely, containing different species, parts, and preparations of the echinacea plant. Reviews of research have found limited evidence that some echinacea preparations may be useful for treating colds in adults, while other preparations did not seem to be helpful. In addition, echinacea has not been shown to reduce the number of colds that adults catch. Only a small amount of research on echinacea has been done in children, and the results of that research are inconsistent. A note about safety: Few side effects have been reported in clinical trials of echinacea; however, some people may have allergic reactions. In one large clinical trial in children, those who took echinacea had an increased risk of developing rashes. The evidence that probiotic supplements may help to prevent colds is weak, and little is known about their long-term safety. Probiotics are a type of “good bacteria,” similar to the microorganisms found in the body, and may be beneficial to health. Probiotics are available as dietary supplements and yogurts, as well as other products such as suppositories and creams. Although a 2015 analysis of research indicated that probiotics might help to prevent upper respiratory tract infections, such as the common cold, the evidence is weak and the results have limitations.
A note about safety: Little is known about the effects of taking probiotics for long periods of time. Most people may be able to use probiotics without experiencing any side effects—or with only mild gastrointestinal side effects such as gas—but there have been some case reports of serious side effects. Probiotics should not be used by people with serious underlying health problems except with close monitoring by a health care provider.
no
Ethnobotany
Can Echinacea prevent colds?
yes_statement
"echinacea" can "prevent" "colds".. cold "prevention" is possible with "echinacea".
https://academic.oup.com/cid/article/38/10/1367/344444
Echinacea purpurea for Prevention of Experimental Rhinovirus Colds
Abstract A randomized, double-blind, placebo-controlled clinical trial was conducted to evaluate the ability of Echinacea purpurea to prevent infection with rhinovirus type 39 (RV-39). Forty-eight previously healthy adults received echinacea or placebo, 2.5 mL 3 times per day, for 7 days before and 7 days after intranasal inoculation with RV-39. Symptoms were assessed to evaluate clinical illness. Viral culture and serologic studies were performed to evaluate the presence of rhinovirus infection. A total of 92% of echinacea recipients and 95% of placebo recipients were infected. Colds developed in 58% of echinacea recipients and 82% of placebo recipients (P = .114, by Fisher's exact test). Administration of echinacea before and after exposure to rhinovirus did not decrease the rate of infection; however, because of the small sample size, statistical hypothesis testing had relatively poor power to detect statistically significant differences in the frequency and severity of illness.

Colds are the most common acute infectious illnesses in humans. Prevention of the common cold with immunization is not practical because of the antigenic diversity of the many viruses causing colds. For example, rhinoviruses, which account for ∼40% of adult colds, have >100 antigenic serotypes. Viruses of different and distinct families, such as coronavirus, parainfluenza virus, respiratory syncytial virus, influenza virus, adenovirus, and metapneumovirus, also cause colds. Products derived from Echinacea purpurea, the purple coneflower, are among the most popular herbal remedies in the United States. It is estimated that Americans spend more than $300 million/year on these products [1], which are commonly self-administered for the prevention and treatment of the common cold. The 3 most commonly used species for medicinal purposes are E. purpurea, Echinacea pallida, and Echinacea angustifolia.
A number of studies, which used a variety of plant parts (such as root or above-ground components) from different species of echinacea, alone or combined with other herbs or echinacea species, have been conducted to evaluate the effects on prevention and treatment of naturally occurring colds [2–21]. These studies have also used various methodologies, end points, and definitions of colds. The experimental rhinovirus model, in which the viral etiology of each participant's cold is known, reduces some of the variability and allows for more-exact measurement of effect on infection rates, in addition to measuring clinical illness rates and severity [22, 23]. Turner et al. [2] recently reported the results of a study in which volunteers were pretreated with echinacea or placebo for 14 days before challenge with rhinovirus type 23 (RV-23), with continuation of treatment for 5 days after virus challenge. In their induced-cold study, no significant differences were observed in the rate of rhinovirus infection or illness. Among the most commonly used formulations of E. purpurea is the pressed juice of the above-ground parts of the herb (EchinaGuard; known as “Echinacin” in Germany; Madaus Aktiengesellschaft). This preparation has previously been studied by Grimm and Muller [3] for the prevention of natural colds. We conducted a double-blind, randomized, placebo-controlled study to evaluate the efficacy of this preparation of echinacea to reduce the rate of infection and illness in volunteers when administered for 7 days before and 7 days after challenge with rhinovirus type 39 (RV-39). Subjects and Methods Subjects. Forty-eight healthy adult volunteers aged 18–65 years with serum neutralizing antibody titers of ⩽1 : 2 to RV-39 were recruited. The study was approved by an independent institutional review board, and all volunteers gave written informed consent for participation. 
Individuals with conditions likely to affect susceptibility to colds or the severity or duration of cold symptoms were excluded from the study. Individuals who had received medication known to affect rhinorrhea, cough, or nasal congestion within 7 days (4 weeks for cromolyn sodium and long-acting antihistamines) before study initiation were excluded. Pregnant or breast-feeding women and participants who reported sensitivity to any of the ingredients in the study product were also excluded. Participants received financial compensation. Treatments. One group of participants received a formulation containing the pressed juice of the above-ground plant parts of E. purpurea placed in a 22% alcohol base (EchinaGuard), and another group received a matching placebo. The active medication and placebo were identical in appearance, taste, and smell and were packaged in identical 100-mL bottles. Experimental design. Participants were randomized to receive either echinacea or placebo, 2.5 mL 3 times per day (every ∼6–8 h) for 14 days. After 7 days, participants returned in the early morning (virus inoculation day 1 [V1]) for inoculation with RV-39 administered intranasally via pipette in 2 inocula provided ∼30 min apart (total dose, 0.25 mL per nostril), with the participant in a supine position. Each participant was asked not to blow his or her nose for 30 min after viral challenge. The virus originated from a clinical isolate and was kindly provided by A. M. Before use, the virus pool was tested for safety. The total virus inoculum was equivalent to ∼300 TCID50 per volunteer. Beginning 24 h after virus inoculation (i.e., on V2) and continuing through V4, participants were isolated in individual hotel rooms for assessment. During their hotel stay, the participants continued treatment with echinacea or placebo as previously instructed. On V5–V7, participants completed treatment at home. Identification of infection. 
Serological assessments of serum neutralizing antibody titer to RV-39 were made on V1 (before virus inoculation) and at V21–V25, as described elsewhere [24]. Specimens for viral culture were obtained by nasal lavage during the subject's hotel stay (V2–V4) to identify the presence of rhinovirus. Infection was defined as at least a 4-fold increase in RV-39 neutralizing antibody titer and/or recovery of rhinovirus on viral culture. Clinical measurements of illness. The occurrence and severity of symptoms were recorded 3 times daily on a diary card beginning on V1 (the day of virus inoculation) and continuing through V7 using a 4-point severity rating scale (0, absent; 1, mild; 2, moderate; and 3, severe). The symptoms assessed were rhinorrhea, congestion, sneezing, cough, sore throat, headache, malaise, and chilliness. Thereafter, symptoms were assessed once per day until the completion of the study (V21–V25). The maximum scores of the 3 daily assessments for each of the 8 individual symptoms on V1–V5 after rhinovirus inoculation were added to give a 5-day total symptom score. Clinical illness—that is, the presence of a cold—was defined as a 5-day total symptom score of ⩾5 and 1 or both of the following: 3 successive days of rhinorrhea or a positive response to the query on whether the subject felt he or she had developed a cold since virus inoculation. This method of symptom scoring and of diagnosing illness is a modification of the methods of Jackson et al. [25] and Gwaltney et al. [26] described elsewhere. Data analysis. Demographic parameters were tested for treatment group differences by Student's t test or χ2 analysis, as appropriate. Ninety-five percent CIs were constructed on the proportion of each treatment group who met the criteria for infection, as well as the difference in proportions between the 2 treatment groups. The 95% CIs for differences were based on the normal approximation to the binomial distribution. 
The treatment proportions were also compared using Fisher's exact test and χ2 analysis. The study was designed as the first stage of a 2-stage adaptive design based on the methodology described by Bauer and Köhne [27]. The primary end point determined a priori was development of a cold, defined separately as laboratory infection and clinical illness. When ∼50 subjects had completed the study and clinical and virologic end points were known for each subject, an adaptive interim statistical analysis of the data was performed to determine a final sample size for the study and to redefine the primary efficacy parameter. The critical P level for rejection of the null hypothesis at completion of stage 1 was .0087. If the P value at stage 1 (P1) exceeded the critical level, the primary outcome criterion for stage 2 could be revised on the basis of the outcome showing the greatest sensitivity for resolving treatment differences in stage 1. The power of a χ2 test for stage 2 was to be computed on the basis of a sample size of 75 subjects per treatment group, the outcome rates as observed in stage 1, and an α level of 0.0087/P1. Otherwise, the study was to be terminated at stage 1. When infection was used as the primary end point, the power for stage 2 was estimated to be 5%. When the clinical criterion of illness was used, the power for stage 2 was estimated to be 94%. In view of the primary importance assigned by the study sponsor to the infection criterion, the sponsor decided to terminate the study at stage 1. On the basis of the estimates of the 7-day area-under-the-curve values of the total symptom scores, which showed mean symptom scores of 9.34 for echinacea and 12.17 for placebo, and a pooled SD of 9.45, a sample size of 175 subjects per treatment group would have been required to provide 80% power to detect a statistically significant difference at the 5% level of significance. Results Participants. 
A total of 48 volunteers, 24 in each treatment group, were enrolled and randomized to receive a study drug. All participants completed the study. Two participants, both of whom were from the placebo group, were excluded from the efficacy analysis. One participant had an entry of moderate sneezing in her daily diary before virus inoculation, and 1 participant had a positive serum neutralizing antibody titer at inoculation. There were no significant differences between the echinacea and placebo groups with regard to sex (50% vs. 42% male subjects) or mean age (±SD) (33 ± 13 vs. 33 ± 12 years). Assessment of infection. Fourfold or greater increases in serum neutralizing antibody titers to RV-39 occurred in 58% of echinacea recipients and in 55% of placebo recipients. Rhinovirus was recovered from 88% and 95% of the volunteers in the echinacea and placebo groups, respectively. The frequency of virus recovery was the same in both groups (82% of cultures were positive for rhinovirus). Overall, the proportion of participants who demonstrated laboratory evidence of infection was 92% for echinacea recipients (95% CI, 73–99) and 96% for placebo recipients (95% CI, 77–100). Assessment of illness. Colds developed in 58% of the echinacea recipients (95% CI, 37–78) and 82% of the placebo recipients (95% CI, 60–94) (P = .114, by Fisher's exact test). The difference in rates was 24% (range, -2 to 49). The total 7-day symptom score (±SD) was 9.34 ± 9.43 for the recipients of echinacea and 12.17 ± 9.56 for placebo recipients. Similarly, daily symptom scores tended to be lower in echinacea recipients on days 2–7 after RV-39 inoculation than they were in placebo recipients, but the differences were not significant (figure 1). Individual symptom scores were not significantly different between treatment groups (table 1). Of those infected with RV-39, 59% of the echinacea recipients developed colds, compared with 86% of the placebo recipients (P = .0883, by Fisher's exact test). 
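The headline illness comparison can be reproduced from the reported figures. The 2 × 2 table of 14/24 echinacea vs. 18/22 placebo colds is reconstructed here from the stated percentages and group sizes (an assumption of this sketch, since the paper reports only percentages). A standard-library, two-sided Fisher's exact test (conditional method, the convention used by common statistics packages) recovers the reported P value:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all tables with the same margins whose
    probability does not exceed that of the observed table."""
    row1, col1, n = a + b, a + c, a + b + c + d
    denom = comb(n, col1)

    def p(x):  # hypergeometric P(x row-1 items in column 1)
        return comb(row1, x) * comb(n - row1, col1 - x) / denom

    p_obs = p(a)
    lo = max(0, col1 - (n - row1))
    hi = min(row1, col1)
    # small relative tolerance guards against float ties
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs * (1 + 1e-9))

# colds / no colds: 14 of 24 echinacea recipients, 18 of 22 placebo recipients
p_value = fisher_exact_two_sided(14, 18, 10, 4)
assert round(p_value, 3) == 0.114  # matches the reported P = .114
```

With these reconstructed counts, the test gives P ≈ 0.114, in agreement with the paper's reported value.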
Figure 1. Seven-day individual and total symptom scores after challenge with rhinovirus type 39 in volunteers who received either echinacea or placebo. Tolerance. Six participants (4 in the placebo group and 2 in the echinacea group) reported a total of 8 adverse events. There were no treatment-limiting adverse events. The 2 adverse events reported by subjects treated with echinacea were sleeplessness and severe oral aphthous ulcers, which resolved spontaneously during treatment. Both events were judged to be unrelated or unlikely to be related to the study treatment. Discussion E. purpurea, which is one of the most commonly used herbal remedies in the United States, is often ingested to prevent or ameliorate the course of the common cold. In this controlled trial, we used a challenge model to study the effects of the pressed juice of the above-ground plant parts of E. purpurea, administered for 7 days before and for 7 days after RV-39 inoculation, on rhinovirus colds. The results of the study suggest that echinacea was not effective for preventing rhinovirus infection as defined by laboratory criteria. Among those who were infected and receiving echinacea, there was a trend toward reduction in the number of clinical colds, compared with those who were infected and received placebo (59% vs. 86%; P = .0883). A number of studies that used varying study designs and a variety of plant parts, such as root or above-ground components from different species of echinacea alone or in combination with other herbs, have reported various effects—although mostly not significant—on the prevention of natural colds. Forth and Beuscher [4] studied the effects of a tablet and liquid product containing the roots of E. angustifolia and E. pallida with other extracts on the self-reported incidence and severity of natural colds. Tablet recipients had 38% fewer nasal symptoms than did placebo recipients, but other outcomes were similar [5]. Schmidt et al. [6] studied a preparation of E.
angustifolia herb and root and other extracts that was administered to 646 college students for 8 weeks to prevent upper respiratory tract infection and flulike illness. They reported a 15% reduction in illness, which did not achieve statistical significance [5]. A 3-arm study conducted by Melchart et al. [7] compared 12-week regimens of extracts of the roots of E. purpurea and E. angustifolia with placebo for prevention of colds in 302 volunteers. They observed no significant differences in the rate of infection or time to first infection. Grimm and Muller [3] reported a trial of the pressed juice of the above-ground parts of E. purpurea ingested for 8 weeks to prevent natural colds. There were no significant differences in incidence, severity, or duration of colds between echinacea and matched placebo in 108 subjects. Schoneberger [28] had previously reported results from the same trial. In the only previously published study to evaluate echinacea in experimental colds, Turner et al. [2] administered echinacea for 14 days before and 5 days after challenge with RV-23 and observed no benefit. Numerous studies have evaluated echinacea for the treatment of established colds [8–18]. Varying definitions of respiratory illness, end points, methods of data collection, and differences in the time of initiation of treatment in the course of illness make comparisons of studies difficult. Furthermore, preparations of echinacea used in various studies may have significantly different amounts of active ingredient [20, 21]. Clinical efficacy (although modest in some cases) has been reported in many of the treatment studies, suggesting echinacea is associated with a greater benefit for treating established colds than for preventing infection. A recently published controlled trial by Barrett et al. [18] reported no beneficial effect of a mixture of unrefined E. purpurea herb and root and E. angustifolia root in the treatment of natural colds. 
In our study, we observed trends toward reduction in the total symptom score (by 23%) and the frequency of illnesses meeting the definition of a cold (by 29%–31%) in echinacea recipients. One explanation for the trends toward improvement in clinical illness without an effect on the rate of infection is that these observations may have been the result of a beneficial effect of echinacea associated with the treatment of established infections rather than with prevention, because therapy was continued for 7 days after virus inoculation, at a time when subjects were symptomatic. Use of the experimental cold model may have allowed for early symptom assessment that was more accurate than that for trials involving natural colds [23]. Our study was unfortunately compromised by its small sample size. The results are consistent with most of the previously reported data with regard to the lack of efficacy of echinacea to prevent natural or experimental colds. Further investigation of echinacea for treatment of experimental rhinovirus infections, with a larger number of subjects and with specific standardized preparations of echinacea of known potency, should clarify the efficacy of echinacea in the treatment of colds. Acknowledgments We thank Dr. Gary B. Munk for technical expertise and Dr. Lisa A. Goldman for assistance in the preparation of the manuscript.
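The clinical-cold comparison reported above (59% of infected echinacea recipients vs. 86% of infected placebo recipients; P = .0883) is a two-proportion comparison. As a rough illustration only, a pooled two-proportion z-test can be sketched in a few lines of Python. The counts below are hypothetical (chosen merely to reproduce roughly 59% vs. 86%; the study's actual group sizes are not given here), and small trials of this kind would normally use Fisher's exact test, so the resulting p-value will not match the paper's.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided pooled two-proportion z-test.

    Returns the z statistic and a normal-approximation p-value.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 13/22 (~59%) colds with echinacea vs. 19/22 (~86%) with placebo.
z, p = two_proportion_z(13, 22, 19, 22)
print(f"z = {z:.2f}, p = {p:.3f}")
```

The normal approximation is coarse at sample sizes this small, which is one reason exact methods (e.g., Fisher's exact test) are preferred for small 2x2 tables.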
no
Ethnobotany
Can Echinacea prevent colds?
yes_statement
"echinacea" can "prevent" "colds".. cold "prevention" is possible with "echinacea".
https://wwwnc.cdc.gov/travel/yellowbook/2024/preparing/complementary-and-integrative
Complementary & Integrative Health Approaches to Travel Wellness ...
Complementary & Integrative Health Approaches to Travel Wellness CDC Yellow Book 2024 Travelers often ask their health care providers about the use of complementary or integrative health approaches for travel-related illnesses and conditions. Claims made about dietary supplements, herbal products (see Box 2-15), and other complementary approaches for travel-related health problems may not be supported by evidence. Be prepared to discuss what is known about the reported benefits of complementary and integrative health approaches and to counsel travelers on their possible side effects or interactions with prescribed vaccines or medications. Box 2-15 Dietary supplements & unproven therapies Unproven therapies are discussed in this chapter only for educational purposes and are not recommended for use. The Centers for Disease Control and Prevention only endorses therapies approved by the US Food and Drug Administration (FDA). FDA regulates dietary supplements, but the regulations are generally less strict than those for prescription or over-the-counter drugs. Two major safety concerns about dietary supplements are potential drug interactions and product contamination. Analyses of supplements sometimes find differences between labeled and actual ingredients. For example, products marketed as dietary supplements have been found to contain illegal hidden ingredients, such as prescription drugs. Claims Versus Science Altitude Illness Many natural products, including coca leaf, garlic, Ginkgo biloba, and vitamin E, have been promoted for preventing or treating altitude illness. For more information on altitude illness, see Sec. 4, Ch. 5, High Elevation Travel & Altitude Illness.
Coca Leaf Coca leaf, chewed or made into tea, has been used for altitude illness, but no strong evidence has shown that it works or that it has adverse effects. Travelers should be aware that using coca leaf will cause a positive drug test result for cocaine metabolites. Garlic No evidence supports claims that garlic helps reduce altitude illness. Garlic supplements appear safe for most adults. Possible side effects include breath and body odor, heartburn, and upset stomach. Some people have allergic reactions to garlic. Short-term use of most commercially available garlic supplements poses only a limited risk for drug interactions. Ginkgo Biloba Studies of Ginkgo biloba for preventing altitude illness are inadequate to justify recommendations about its use. Products made from standardized ginkgo leaf extracts appear to be safe when used as directed. However, ginkgo can increase the risk of bleeding in some people and interact with anticoagulants. In addition, studies by the National Toxicology Program showed that rodents developed liver and thyroid tumors after being given a ginkgo extract for up to 2 years. Vitamin E One study investigated vitamin E, in combination with other antioxidants, for altitude illness; no significant benefit was observed. Colds & Flu Although colds and flu are not uniquely travel-related hazards, many people try to avoid these illnesses during a trip. Complementary health approaches that have been advocated for preventing or treating colds or influenza include echinacea, garlic and other herbs, nasal saline irrigation, probiotics, vitamin C, zinc products, and others. Echinacea Numerous studies have tested the herb echinacea to see whether it can prevent colds or relieve cold symptoms. A 2014 systematic review concluded that echinacea has not been convincingly shown to be effective; however, a weak effect was not ruled out. 
Garlic & Other Herbs No strong evidence supports claims that garlic, Chinese herbs, oil of oregano, or eucalyptus essential oil prevent or treat colds, or that the homeopathic product Oscillococcinum prevents or treats influenza or influenza-like illness. Nasal Saline Irrigation Nasal saline irrigation (e.g., use of neti pots) can be useful and safe for chronic sinusitis. Nasal saline irrigation also can help relieve the symptoms of acute upper respiratory tract infections, but the evidence is not definitive. Even in places where tap water is safe to drink, people should use only sterile, distilled, boiled-then-cooled, or specially filtered water for nasal irrigation to avoid the risk of introducing waterborne pathogens. Probiotics Probiotics might reduce susceptibility to colds or other upper respiratory tract infections and the duration of the illnesses, but the quality of the evidence is low or very low. Vitamin C Taking vitamin C supplements regularly reduces the risk of catching a cold among people who perform intense physical exercise, but not in the general population. Taking vitamin C on a regular basis might lead to shorter-duration colds, but taking it only after cold symptoms appear does not. Vitamin C supplements appear to be safe, even at high doses. Zinc Zinc taken orally, often in the form of lozenges, within 24 hours of symptom onset might reduce the duration of a cold. No firm recommendation currently can be made, however, regarding prophylactic zinc supplementation because of insufficient data. When taken in large doses, side effects from zinc can include nausea and diarrhea, copper deficiency, and decreased absorption of some medications. Intranasal use of zinc can cause anosmia (loss of sense of smell), which can be long-lasting or permanent.
Coronavirus Disease 2019 A variety of dietary supplements, including elderberry, melatonin, colloidal silver, vitamin C, vitamin D, and zinc have each been suggested to prevent or treat coronavirus disease 2019 (COVID-19). Except for colloidal silver (for which no plausible mechanism of action exists), the listed supplements have theoretical applications in preventing or treating COVID-19; evidence of efficacy from clinical trials is limited, however, and without clear demonstration of benefit. In addition, use of colloidal silver and zinc carries health and safety concerns. Colloidal silver (and other silver products) can cause argyria, a permanent blue-gray discoloration of the skin and other organs. High-dose supplementation with zinc can cause nausea and diarrhea, copper deficiency, and decreased absorption of some medications. The National Institutes of Health (NIH) COVID-19 Treatment Guidelines recommend against supplementation with zinc above the recommended dietary allowance because of these risks and the lack of evidence of clinical benefit. Homeopathic Vaccines Proponents of homeopathy claim that products called nosodes, or homeopathic vaccines, are effective substitutes for conventional immunizations. No credible scientific evidence or plausible scientific rationale supports these claims. For more information on travel vaccines, see Sec. 2, Ch. 3, Vaccination & Immunoprophylaxis—General Principles. Insect Repellents Many products are promoted as “natural” insect repellents, and their use can appeal to people who prefer not to use synthetic products. Products promoted as natural mosquito repellents include citronella products, neem oil (a component of agricultural insecticide products promoted on some websites for home use), and oil of lemon eucalyptus (OLE). Essential oils and other natural products are promoted to repel bed bugs. 
Travelers should use only Environmental Protection Agency (EPA)–registered insect repellents; more information is available at the EPA website. Botanicals Laboratory-based studies found that botanicals, including citronella products, worked for shorter periods than products containing DEET (N,N-diethyl-m-toluamide or N,N-diethyl-3-methyl-benzamide). For people who choose to use botanicals, the Centers for Disease Control and Prevention (CDC) recommends EPA-registered products containing OLE (oil of lemon eucalyptus). Limited evidence suggests that neem oil could be beneficial as a natural repellent. For more information on insect repellents, see Sec. 4, Ch. 6, Mosquitoes, Ticks & Other Arthropods. Bed Bug Repellents No evidence supports effectiveness of natural products marketed to repel bed bugs. Instead, encourage travelers to follow steps to detect and avoid bed bugs (e.g., inspecting mattresses, keeping their luggage off the floor or bed). More information is available at CDC’s Parasites website and in Section 4, Box 4-10, Recommended protective measures to avoid or reduce bed bug exposure. Aromatherapy Very little evidence supports the belief that aromatherapy or the herbs chamomile or valerian help with insomnia. Major side effects are uncommon, but chamomile can cause allergic reactions. Another herb, kava, also is promoted for sleep, but good research on its effectiveness is lacking. More importantly, kava supplements have been linked to a risk of severe liver damage. Melatonin Some evidence suggests that melatonin supplements can help with sleep problems caused by jet lag in people traveling either east or west. Melatonin is sold as a dietary supplement; dietary supplements are less strictly regulated than drugs. The amounts of ingredients in dietary supplements can vary, and product contamination is a potential concern.
A 2017 analysis of melatonin supplements sold in Canada found that their actual melatonin content ranged from <83% to >478% of the labeled content and that substantial lot-to-lot variation was evident. Also, 26% of products contained serotonin as a contaminant. Melatonin supplements appear to be safe for most people who use them for discrete periods of time; an absence of studies examining the effects associated with continued use makes it challenging to know with certainty its long-term safety and tolerability. In a 2019 systematic review of mostly short-term trials of melatonin for sleep problems, the most frequently reported adverse events were daytime sleepiness (1.66%), dizziness (0.74%), headache (0.74%), other sleep-related adverse events (0.74%), and hypothermia (0.62%). Almost all adverse events were considered mild–moderate in severity and tended to resolve either spontaneously or after discontinuing treatment. Caution people with epilepsy or who take an oral anticoagulant against using melatonin without medical supervision. In addition, advise travelers not to take melatonin early in the day, because it can cause sleepiness and delay adaptation to local time. Relaxation Techniques Relaxation techniques (e.g., progressive relaxation and other mind and body practices, including mindfulness-based stress reduction) can help with insomnia, but their effectiveness for jet lag has not been established. Malaria Many consumer websites promote “natural” ways to prevent or treat malaria, which often involve dietary changes or herbal products (e.g., quinine from the cinchona tree [Cinchona spp.]) or extracts and material from the artemisia plant (Artemisia annua L. or sweet wormwood). Strongly urge patients to follow official recommendations, including the use of malaria chemoprophylaxis, and not to rely on unproven “natural” approaches to prevent or treat such a serious disease. Recommended drugs to prevent and treat malaria are described in Sec. 5, Part 3, Ch. 
16, Malaria. Motion Sickness Acupressure & Magnets Research does not support the use of acupressure or magnets for motion sickness. Ginger Although some studies have shown that ginger might ease pregnancy-related nausea and vomiting, no strong evidence shows that it helps with motion sickness. In some people, ginger can have mild side effects (e.g., abdominal discomfort). Research has not definitively shown whether ginger interacts with medications, but concerns have been raised that it could interact with anticoagulants. The effect of using ginger supplements with common over-the-counter drugs for motion sickness (e.g., dimenhydrinate [Dramamine]) is unknown. Homeopathic Remedies Pyridoxine (Vitamin B6) Although an American Congress of Obstetrics and Gynecology 2015 Practice Bulletin Summary recommends pyridoxine (vitamin B6) alone or in combination with doxylamine (an antihistamine) as a safe and effective treatment for nausea and vomiting associated with pregnancy, no evidence supports claims that pyridoxine prevents or alleviates motion sickness. Taking excessive doses of pyridoxine supplements for long periods of time can affect nerve function. Sun Protection Many “natural sunscreen” products are promoted online, as are recipes for homemade sunscreen and advice on consuming dietary supplements or drinking teas to protect against sun damage. No studies have proven that any dietary supplement or herbal product, including aloe vera, beta carotene, epigallocatechin gallate (EGCG; a green tea extract), or selenium reduces the risk for skin cancer or sun damage. For more information, see Sec. 4, Ch. 1, Sun Exposure. Travelers’ Diarrhea A variety of products, including activated charcoal, goldenseal, grapefruit seed extract, and probiotics are claimed to prevent or treat travelers’ diarrhea (TD). Counsel travelers about food and water safety precautions. For more information, see Sec. 2, Ch. 8, Food & Water Precautions. 
Activated Charcoal No solid evidence supports claims that activated charcoal helps with TD, bloating, stomach cramps, or gas. The side effects of activated charcoal have not been well documented but were mild when it was tested on healthy people. Children should not be given activated charcoal for diarrhea and dehydration because it can absorb nutrients, enzymes, and antibiotics in the intestine and mask the severity of fluid loss. Goldenseal No high-quality research has been published on goldenseal for TD. Studies show that goldenseal inhibits cytochrome P450 enzymes, raising concerns that goldenseal might increase the toxicity or alter the effects of some drugs. Grapefruit Seed Extract Claims that grapefruit seed extract can prevent bacterial foodborne illnesses are not supported by research. People who need to avoid grapefruit because it interacts with medicine they are taking should also avoid grapefruit seed extract. Probiotics To date, insufficient evidence exists to draw definite conclusions about the efficacy of probiotics for the prevention of TD. Although some studies have had promising results, meta-analyses have reached conflicting conclusions. Interpretation of the evidence is difficult because studies have used a variety of microbial strains, some studies were not well controlled, and the optimal doses and duration of use have not been defined. For more information, see Sec. 2, Ch. 6, Travelers’ Diarrhea. Untested Therapies Used In Other Countries CDC does not recommend traveling to other countries for untested medical interventions or to buy medications that are not approved in the United States. For more information see the chapters in Section 6, Health Care Abroad. 
Talking To Travelers About Complementary Health Approaches Given the vast number of complementary or integrative interventions and the wealth of potentially misleading information about them that can be found on the internet, discussing the use of these approaches with patients can seem daunting. Be proactive, though, because surveys show that many patients are reluctant to raise the topic with health care providers. Federal agencies (e.g., the National Center for Complementary and Integrative Health [NCCIH]) offer evidence-based resources to help providers and their patients have meaningful discussions about complementary approaches. Acknowledgments The authors thank Mr. Philip Kibak of ICF for his editorial assistance. The following authors contributed to the previous version of this chapter: David Shurtleff, Kathleen Meister, Catherine Law
no
Ethnobotany
Can Echinacea prevent colds?
yes_statement
"echinacea" can "prevent" "colds".. cold "prevention" is possible with "echinacea".
https://www.sciencedirect.com/science/article/abs/pii/S0965229918312585
Echinacea for the prevention and treatment of upper respiratory tract ...
Eligibility criteria Participants and interventions Participants who are otherwise healthy of any age and sex. We considered any echinacea-containing preparation. Study appraisal and synthesis methods We used the Cochrane Collaboration’s tool for quality assessment of included studies and performed three meta-analyses: on the prevention, duration, and safety of echinacea. Results For the prevention of upper respiratory tract infection using echinacea we found a risk ratio of 0.78 [95% CI 0.68–0.88]; for the treatment of upper respiratory tract infection using echinacea we found a mean difference in average duration of −0.45 [95% CI −1.85 to 0.94] days; finally, for the safety meta-analysis we found a risk ratio of 1.09 [95% CI 0.95–1.25]. Limitations The limitations of our review include clinical heterogeneity (for example, many different preparations were tested), the risk of selective reporting, deviations from our protocol, and lack of contact with study authors. Conclusions Our review presents evidence that echinacea might have a preventative effect on the incidence of upper respiratory tract infections, but whether this effect is clinically meaningful is debatable. We did not find any evidence for an effect on the duration of upper respiratory tract infections. Regarding the safety of echinacea, no risk is apparent in the short term at least. The strength of these conclusions is limited by the risk of selective reporting and methodological heterogeneity. Implications of key findings Based on the results of this review, users of echinacea can be assured that echinacea preparations are safe to consume in the short term; however, they should not be confident that commercially available remedies are likely to shorten the duration or effectively prevent URTI. Researchers interested in the potential preventative effects of echinacea identified in this study should aim to increase the methodological strength of any further trials.
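The pooled risk ratio quoted above (0.78 [95% CI 0.68–0.88]) comes from a meta-analysis across trials, but the per-study calculation it builds on is simple. A minimal Python sketch, using hypothetical 2x2 counts (so the interval will not match the pooled one), with a Wald confidence interval computed on the log scale:

```python
import math

def risk_ratio_ci(events_tx, n_tx, events_ctl, n_ctl, z=1.96):
    """Risk ratio of treated vs. control with a log-scale Wald 95% CI."""
    rr = (events_tx / n_tx) / (events_ctl / n_ctl)
    # Standard error of log(RR) for a 2x2 table.
    se = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctl - 1 / n_ctl)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# Hypothetical counts: 39/100 infections with echinacea vs. 50/100 with placebo.
rr, lower, upper = risk_ratio_ci(39, 100, 50, 100)
print(f"RR = {rr:.2f} [95% CI {lower:.2f}-{upper:.2f}]")
```

A single trial of this size yields a much wider interval than the meta-analytic estimate; it is the pooling of many trials (e.g., inverse-variance weighting on the log scale) that narrows the CI to the 0.68–0.88 reported in the review.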
yes
Ethnobotany
Can Echinacea prevent colds?
yes_statement
"echinacea" can "prevent" "colds".. cold "prevention" is possible with "echinacea".
https://www.nbcnews.com/health/health-news/echinacea-no-help-kids-study-says-flna1c9477929
Echinacea no help to kids, study says
Echinacea no help to kids, study says Echinacea failed to relieve children’s cold symptoms and appeared to cause skin rashes in some cases, a study of 407 youngsters found. It is one of the largest studies yet to question the benefits of the popular but unproven herbal remedy. With reported sales of more than $300 million annually, echinacea is one of the most widely used herbal remedies nationwide. Also known as the purple coneflower, echinacea is sold in a variety of over-the-counter preparations, including pills, drops and lozenges that are purported to boost the body’s disease-fighting immune system. Anecdotal reports and some animal studies suggest the herb can prevent and relieve respiratory infections, but human studies have had mixed results. The herb was not effective at treating colds in a small study of college students published last year. In the current study of 407 Seattle-area children ages 2 to 11, echinacea plant extract worked no better than a dummy preparation in reducing sneezing, runny noses and fever. “We did not find any group of children in whom echinacea appeared to have a positive benefit,” said the researchers, led by Dr. James Taylor of the University of Washington’s Child Health Institute. Study details Symptoms lasted an average of nine days in children given echinacea and in those taking the placebo, and the overall severity of symptoms were similar. Mild skin rashes occurred in 7 percent of colds treated with echinacea but in only 2.7 percent of colds treated with the dummy preparation. None of the rashes required medical treatment. The findings appear in Wednesday’s Journal of the American Medical Association. Healthy patients were enrolled and followed for four months. At the outset, parents were instructed to call the researchers when their children developed at least two cold symptoms. Parents then were asked to start administering treatment. 
That lag time may explain why no benefits were found, said Mark Blumenthal, executive director of the American Botanical Council, an independent group that studies herbs. He said echinacea is thought to work best if taken as soon as the first symptoms appear. Some of the children had multiple colds during the study, but there were 33 fewer colds in the echinacea group — results Blumenthal said suggest that echinacea might have helped prevent subsequent colds. Taylor said those results could be just a fluke. The study was not designed to examine prevention. Blumenthal said the rashes that developed may have been a rare side effect from pollen in the echinacea plant flower. The echinacea used in the study was made by the German company Madaus AG and contained extract mostly from the flower. Blumenthal said many echinacea products are made instead from the root. Jim Bruce, president of Madaus’ United States-based subsidiary, said numerous previous studies showed the product to be effective at preventing and treating colds.
no
Ethnobotany
Can Echinacea prevent colds?
yes_statement
"echinacea" can "prevent" "colds".. cold "prevention" is possible with "echinacea".
https://www.wnyurology.com/content.aspx?chunkiid=21677
Echinacea - Western New York Urology Associates, LLC
Probably Not Effective Uses The decorative plant Echinacea purpurea, or purple coneflower, has been one of the most popular herbal medications in both the United States and Europe for over a century. Native Americans used the related species Echinacea angustifolia for a wide variety of problems, including respiratory infections and snakebite. Herbal physicians among the European colonists quickly added the herb to their repertoire. Echinacea became tremendously popular toward the end of the nineteenth century, when a businessman named H.C.F. Meyer promoted an herbal concoction containing E. angustifolia. The garish, exaggerated, and poorly written nature of his labeling helped define the characteristics of a "snake oil" remedy. However, serious manufacturers developed an interest in echinacea as well. By 1920, the respected Lloyd Brothers Pharmaceutical Company of Cincinnati, Ohio, counted echinacea as its largest-selling product. In Europe, physicians took up the American interest in E. angustifolia with enthusiasm. Demand soon outstripped the supply coming from America, and, in an attempt to rapidly plant echinacea locally, the German firm Madeus and Company mistakenly purchased a quantity of Echinacea purpurea seeds. This historical accident is the reason why most echinacea today belongs to the purpurea species instead of angustifolia. Another family member, Echinacea pallida, is also used. Echinacea was the number one cold and flu remedy in the United States until it was displaced by sulfa antibiotics. Ironically, antibiotics are not effective for colds, while echinacea appears to offer some real help. Echinacea remains the primary remedy for minor respiratory infections in Germany, where over 1.3 million prescriptions are issued each year. In Europe, and increasingly in the US as well, echinacea products are widely used to treat colds and flus.
The best scientific evidence about echinacea concerns its ability to help you recover from colds and minor flus more quickly. The old saying goes that "a cold lasts 7 days, but if you treat it, it will be over in a week." However, good, if not entirely consistent, evidence tells us that echinacea can actually help you get over colds much faster.9-19,40 It also appears to significantly reduce symptoms while you are sick. Echinacea may also be able to "abort" a cold, if taken at the first sign of symptoms. However, taking echinacea regularly throughout cold season is probably not a great idea. Evidence suggests that it does not help prevent colds.20,21,23,24 Until recently, it was believed that echinacea acted by stimulating the immune system. Test tube and animal studies had found that various constituents of echinacea can increase antibody production, raise white blood cell counts, and stimulate the activity of key white blood cells.1-6 However, most recent studies have tended to cast doubt on this theory.7,37,41,42,56-57 The fact that regular use of echinacea does not appear to help prevent colds (or genital herpes 8) also somewhat argues against an immune-strengthening effect. Thus, at present, it can only be said that we don’t understand the means by which echinacea affects cold symptoms. Echinacea has been proposed for the treatment and/or prevention of other acute infections as well. One small double-blind study found that use of an herbal combination containing echinacea enhanced the effectiveness of antibiotic treatment for acute flare-ups of chronic bronchitis.43 However, two other studies failed to find benefit for ear infections in children.44,64 Finally, echinacea is frequently proposed for general immune support. However, as discussed above there is some reason to think that it is not effective for this purpose. 
Reducing the symptoms and duration of colds

Double-blind, placebo-controlled studies enrolling a total of more than 1,000 individuals have found that various forms and species of echinacea can reduce cold symptoms and help you get over a cold faster.9-16,45,56 The best evidence regards products that include the above-ground portion of E. purpurea.58 For example, in one double-blind, placebo-controlled trial, 80 individuals with early cold symptoms were given either an above-ground E. purpurea extract or placebo.17 The results showed that the people who were given echinacea recovered significantly more quickly: just 6 days in the echinacea group versus 9 days in the placebo group. And, symptom reduction with a whole plant formulation of E. purpurea was seen in a double-blind, placebo-controlled study of 282 people.45 But, another study found that while above-ground E. purpurea can reduce the severity of cold symptoms, the root portion may not be effective. In this double-blind trial, 246 individuals with recent onset of a respiratory infection were given either placebo or one of three E. purpurea preparations: two formulations of a product made of 95% above-ground herb (leaves, stems, and flowers) and 5% root, and one made only from the roots of the plant.18 The results showed significant improvements in symptoms with the above-ground preparations, but the root preparation was not effective. And, in a large, randomized study, researchers found that dried echinacea root (10.2 grams for the first 24 hours of a cold and 5.1 grams for the next 4 days) did not improve symptoms more than placebo or no treatment.65 Not all research involving above-ground E. purpurea, however, has supported its beneficial effects.
A double-blind, placebo-controlled study of the above-ground herb, enrolling 120 people, failed to find benefits compared to placebo treatment.46 And an even larger trial (407 participants) failed to find a widely used above-ground extract helpful for treating children with respiratory infections.47 Researchers have also investigated other species of echinacea with mixed results. Benefits were seen with a preparation of E. pallida root 38 and with an herbal beverage tea containing above-ground portions of E. purpurea and E. angustifolia (as well as some E. purpurea root extract).39 On the other hand, a double-blind, placebo-controlled study failed to find benefit with a dry herb product consisting largely of E. purpurea root and E. angustifolia root.40 And, another study failed to find benefit with E. angustifolia root extract.59 The bottom line: at present, the best supporting evidence for echinacea involves the above-ground portion or whole plant extract of E. purpurea, but even here the results are inconsistent.

"Aborting" a cold

A double-blind study suggests that echinacea can not only make colds shorter and less severe, but might also be able to stop a cold that is just starting.19 In this study, 120 people were given E. purpurea or a placebo as soon as they started showing signs of getting a cold. Participants took either echinacea or placebo at a dosage of 20 drops every 2 hours for 1 day, then 20 drops 3 times a day for a total of up to 10 days of treatment. The results were promising. Fewer people in the echinacea group felt that their initial symptoms actually developed into "real" colds (40% of those taking echinacea versus 60% taking the placebo actually became ill). Also, among those who did come down with "real" colds, improvement in the symptoms started sooner in the echinacea group (4 days instead of 8 days). Both of these results were statistically significant.
Preventing colds

Several studies have attempted to discover whether the daily use of echinacea can prevent colds from even starting, but the results have not been promising. In one double-blind, placebo-controlled trial, 302 healthy volunteers were given an alcohol tincture containing either E. purpurea root, E. angustifolia root, or placebo for 12 weeks.20 The results showed that E. purpurea was associated with perhaps a 20% decrease in the number of people who got sick, and E. angustifolia with a 10% decrease. However, the difference was not statistically significant. This means that the benefit, if any, was so small that it could have been due to chance alone. Another double-blind, placebo-controlled study enrolled 109 individuals with a history of four or more colds during the previous year, and gave them either E. purpurea juice or placebo for a period of 8 weeks.21 No benefits were seen in the frequency, duration, or severity of colds. (Note: This paper is actually a more detailed look at a 1992 study widely misreported as providing evidence of benefit.22) Similar results were seen in four other studies as well, enrolling a total of more than 350 individuals.23,24,48,62 A study often cited as evidence that echinacea can prevent colds actually found no benefit in the 609 participants taken as a whole.25 Only by looking at subgroups of participants (a statistically questionable procedure) could researchers find any evidence of benefit, and it was still slight. However, a recent study using a combination product containing echinacea, propolis, and vitamin C did find preventive benefits.49 In this double-blind, placebo-controlled study, 430 children age 1 to 5 years were given either the combination or placebo for 3 months during the winter. The results showed a statistically significant reduction in frequency of respiratory infections. It is not clear which components of this mixture were responsible for the apparent benefits seen.
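The point that a 20% relative decrease "could have been due to chance alone" can be made concrete with a standard two-proportion z-test. The sketch below is illustrative only: the trial's exact per-arm infection counts are not given here, so the figures of 35/100 colds on placebo versus 28/100 on echinacea are assumed numbers chosen to represent a 20% relative decrease in arms of roughly 100 people each.

```python
# Illustrative two-proportion z-test: why a ~20% relative reduction in
# arms of ~100 people each can fail to reach statistical significance.
# The counts below are hypothetical, not the study's actual data.
import math

def two_proportion_z(sick_a, n_a, sick_b, n_b):
    """Return (z, two-sided p) for H0: the two infection rates are equal."""
    p_a, p_b = sick_a / n_a, sick_b / n_b
    pooled = (sick_a + sick_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail of the normal curve
    return z, p

# Hypothetical arms: 35/100 colds on placebo vs 28/100 on echinacea,
# i.e. a 20% relative decrease like the one reported.
z, p = two_proportion_z(35, 100, 28, 100)
print(f"z = {z:.2f}, p = {p:.2f}")  # p comes out far above 0.05
```

With these assumed counts the p-value is roughly 0.29, well above the conventional 0.05 threshold, which is why a trial of this size cannot distinguish a modest real benefit from chance.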
Echinacea is usually taken at the first sign of a cold and continued for 7 to 14 days. Longer-term use of echinacea is not recommended. The best (though not entirely consistent) evidence supports the use of products made from the above-ground portions of E. purpurea (specifically, flowers, leaves and stems); E. pallida root has also shown promise, but E. purpurea root appears to be ineffective. The typical dosage of echinacea powdered extract is 300 mg 3 times a day. Alcohol tincture (1:5) is usually taken at a dosage of 3 to 4 ml 3 times daily, echinacea juice at a dosage of 2 to 3 ml 3 times daily, and whole dried root at 1 to 2 g 3 times daily. There is no broad agreement on what ingredients should be standardized in echinacea tinctures and solid extracts. Note: A survey of available echinacea products found many problems.50 In this 2003 analysis, about 10% had no echinacea at all; about half were mislabeled as to the species of echinacea present; more than half the standardized preparations did not contain the labeled amount of standardized constituents; and the total milligrams of echinacea stated on the label generally had little to do with the actual milligrams of herb present. A subsequent analysis performed in 2004 by the respected testing organization, ConsumerLab.com, also found many problems.60 Many herbalists feel that liquid forms of echinacea are more effective than tablets or capsules, because they feel that part of echinacea's benefit is due to activation of the tonsils through direct contact.26 However, there is no real evidence to support this contention. Finally, goldenseal is frequently combined with echinacea in cold preparations. However, there is not a shred of evidence that oral goldenseal stimulates immunity, nor did traditional herbalists use it for this purpose.27

Echinacea appears to be generally safe.
Even when taken in very high doses, it has not been found to cause any toxic effects.29,51,52,53 Reported side effects are also uncommon and usually limited to minor gastrointestinal symptoms, increased urination, and mild allergic reactions.30 However, severe allergic reactions have occurred occasionally, some of them life threatening.31 In Australia, one survey found that 20% of allergy-prone individuals were allergic to echinacea. Other concerns relate to echinacea’s possible immune-stimulating properties. Immunity is a two-edged sword that the body keeps under careful control; excessively strong immune reactions can be dangerous. Based on this concern, echinacea should be used only with caution (if at all) by individuals with autoimmune disorders, such as multiple sclerosis, lupus, and rheumatoid arthritis. Furthermore, a recent case report strongly suggests that use of echinacea can trigger episodes of erythema nodosum (EN).36 EN is an inflammatory condition that involves tender nodules under the skin. These nodules often arise after cold-like symptoms. In this report, a 41-year-old man took echinacea on four separate occasions when he thought he was developing a cold, and each time he developed EN instead. When he stopped using echinacea for this purpose, he remained free of EN outbreaks for a full year of follow-up. The cause of EN is not known, but it involves increased activity of certain immune cells; echinacea has been observed to cause similar effects in the same immune cells, suggesting that the relationship is not coincidental. One study raised questions about possible antifertility effects of echinacea.32 When high concentrations of echinacea were placed in a test tube with hamster sperm and ova, the sperm were less able to penetrate the ova. However, since we have no idea whether this much echinacea can actually come in contact with sperm and ova when they are in the body rather than a test tube, these results may not be meaningful in real life. 
Animal studies of echinacea are supportive of safety in pregnancy.51,52,54,55 One human study found a bit of evidence that use of echinacea during pregnancy does not increase risk of birth defects, but this evidence is not strong enough to absolutely rely on.33 Furthermore, studies dating back to the 1950s suggest that echinacea is safe in children.34 Nonetheless, the safety of echinacea in young children or pregnant or nursing women cannot be regarded as established. In addition, safety in those with severe liver or kidney disease has also not been established. Two studies suggest that echinacea might interact with various medications by affecting their metabolism in the liver, but the significance of these largely theoretical findings remains unclear.35,61 A review of the research literature found no verifiable reports of drug-herb interactions with any echinacea product.63

Vomel T. The effect of a nonspecific immunostimulant on the phagocytosis of erythrocytes and ink by the reticulohistiocyte system in the isolated, perfused liver of rats of various ages [in German; English abstract]. Arzneimittelforschung. 1984;34:691-695.
Grimm W, Muller H. A randomized controlled trial of the effect of fluid extract of Echinacea purpurea on the incidence and severity of colds and respiratory infections. Am J Med. 1999;106:138-143.
Schoneberger D. The influence of immune-stimulating effects of pressed juice from Echinacea purpurea on the course and severity of colds (results of a double-blind study) [translated from German]. Forum Immunol. 1992;8:2-12.
more quickly. The old saying goes that "a cold lasts 7 days, but if you treat it, it will be over in a week." However, good, if not entirely consistent, evidence tells us that echinacea can actually help you get over colds much faster.9-19,40 It also appears to significantly reduce symptoms while you are sick. Echinacea may also be able to "abort" a cold, if taken at the first sign of symptoms. However, taking echinacea regularly throughout cold season is probably not a great idea. Evidence suggests that it does not help prevent colds.20,21,23,24 Until recently, it was believed that echinacea acted by stimulating the immune system. Test tube and animal studies had found that various constituents of echinacea can increase antibody production, raise white blood cell counts, and stimulate the activity of key white blood cells.1-6 However, most recent studies have tended to cast doubt on this theory.7,37,41,42,56-57 The fact that regular use of echinacea does not appear to help prevent colds (or genital herpes 8) also somewhat argues against an immune-strengthening effect. Thus, at present, it can only be said that we don’t understand the means by which echinacea affects cold symptoms. Echinacea has been proposed for the treatment and/or prevention of other acute infections as well. One small double-blind study found that use of an herbal combination containing echinacea enhanced the effectiveness of antibiotic treatment for acute flare-ups of chronic bronchitis.43 However, two other studies failed to find benefit for ear infections in children.44,64 Finally, echinacea is frequently proposed for general immune support. However, as discussed above there is some reason to think that it is not effective for this purpose.
no
Biotechnology
Can Genetically Modified Crops Promote Biodiversity?
yes_statement
"genetically" modified crops can promote biodiversity. biodiversity can be promoted by "genetically" modified crops
https://www.theatlantic.com/health/archive/2011/03/the-battle-for-biodiversity-monsanto-and-farmers-clash/73117/
The Battle for Biodiversity: Monsanto and Farmers Clash - The Atlantic
The Battle for Biodiversity: Monsanto and Farmers Clash Does genetic modification lead to more and better crops? Or will it destroy the foundations of our food systems? French farmers and activists reap what they called an "illegal" plot of genetically modified rapeseed developed by the agribusiness company Monsanto. Robert Pratta/Reuters Two weeks ago, Monsanto announced the latest genetically engineered crop it hopes to bring to market: a soybean rejiggered to resist the herbicide dicamba. The new product, says Monsanto, will aid in weed control and "deliver peace of mind for growers." Meanwhile, half a world away, La Via Campesina, a farmers' movement of 150 organizations from 70 countries, had a slightly different idea about what would bring peace of mind to its millions of members: protecting biodiversity. In its statement to those gathered in Bali for the United Nations treaty on plant genetics, the organization urged treaty drafters to reevaluate the legal framework that allows seed patenting and the spread of genetically engineered crops, like those Monsanto soybeans. These genetically modified crops and the international patent regime, La Via Campesina said, block farmers' ability to save and share seeds, threatening biodiversity and food security. Monsanto and La Via Campesina represent two distinct worldviews. According to Monsanto and other chemical and seed giants like Syngenta, BASF, and Dupont, corporate control of seeds and relaxed laws for biotech promotion spur innovation and productivity. That may sound good, but La Via Campesina and many other groups around the world look at the real-world effects of 20 years of patent approvals and the spread of biotech crops. These critics argue that corporate power over seeds has actually undermined biodiversity and food-system resilience. This debate is significant.
Which side we listen to will largely determine just how well we can continue to feed the planet, especially as we contend with ever greater weather extremes brought on by global warming when crop resilience will be paramount. Since the 1980 Diamond v. Chakrabarty Supreme Court decision, companies in the U.S. have been able to patent life forms, including seeds. In Europe, since 1999, nearly 1,000 patents on animals and 1,500 on plants have been approved; thousands more are pending, and not just for genetically engineered crops, but for conventional ones, too. Monsanto and Syngenta alone have filed patents for dozens of conventional vegetables, including tomatoes, sweet peppers, and melons. This means tightening control on how and where certain crops can be planted and even whether certain seed lines are continued—or exterminated. In contrast to what we hear from Monsanto, patents actually restrict innovation, as researchers can no longer freely use patented plants in breeding experimentation. Increasing market concentration in seed ownership has also destroyed true market competition. In 2004, half of global seed sales were controlled by 10 companies. Today, those companies control nearly three-quarters of sales. This concentration has led to higher prices and shrinking choice for consumers. Add to this corporate consolidation the spread of biotech crops and you see why biodiversity is becoming so threatened. Biotech crops, like other industrial crops, are monocultures, with single varieties planted over millions of acres and sprayed with chemicals. Despite promises about wonder crops that would end Vitamin A deficiency or withstand drought, nearly all commercially available genetically modified foods are just one of two types, designed either to withstand a specific pesticide or to include a built-in pesticide. Fifty percent of all biotech crops planted worldwide are soybeans. 
Three countries--the United States, Brazil, and Argentina--grow 77 percent of all genetically modified crops, nearly all destined for livestock, not the world's hungry. Biotech crops also affect biodiversity in ways that "traditional" industrial crops don't: by risking the genetic integrity of cultivated and wild plants. In a 2006 report, Doug Gurian-Sherman, now with the Union of Concerned Scientists, explained: "Genetic engineering ups the ante when it comes to the potential for harm to wildlife from gene flow, because organisms in natural ecosystems have not adapted to many of the genes used in field trials." With the recent approval of genetically engineered alfalfa in the United States, organic farmers here are ever more concerned about such a "genetic trespass." Among biodiversity's many benefits is that it provides a reservoir of potentially essential genetic material, varieties that might be found to be more resilient in the face of more droughts and floods, for instance. Says Jack Heinemann, a professor of molecular biology at New Zealand's University of Canterbury, "If we jeopardize this biodiversity for the sake of a possible wonder trait for tomorrow, then we won't have any wonder traits for the day after tomorrow." That's not what the biotech industry is saying. Instead, Monsanto, the world's leading manufacturer of genetically modified foods, is spending millions on a PR campaign to convince the public that its technology will be vital to meeting the world's growing food demands. In early 2009, Monsanto's biotechnology chief, Steve Padgette, claimed that new crops like its forthcoming drought-resistant corn "will reset the bar for on-farm productivity." Never mind that experts in the field say engineering drought resistance is many years off—if even possible—and that biotech crops have not delivered consistently greater yields.
The International Assessment of Agricultural Knowledge, Science and Technology for Development, a multi-year study contributed to by more than 600 experts from around the world, concluded that evidence for the benefits of agricultural biotechnology "is anecdotal and contradictory, and uncertainty about possible benefits and damage is unavoidable." Meanwhile, agricultural projects from around the world—especially in drought-stricken parts of East Africa—are showing the incredible potential of sustainable farming practices. The introduction of agroecological techniques on smallholder plots in hundreds of projects throughout Africa studied by England's University of Essex brought an average increase in crop yields of 116 percent. As a means for improving resiliency and sustainability within the global food chain, agroecology is now supported by a "wide range of experts within the scientific community," said Olivier de Schutter, the United Nations Special Rapporteur on the right to food. Back in Bali, La Via Campesina described its farmer members as being in the midst of a "war for control over seeds." Strong language, yes. But if we don't heed the organization's call for stricter regulation of the biotech and seed industry, biodiversity may just become collateral damage.
Today, those companies control nearly three-quarters of sales. This concentration has led to higher prices and shrinking choice for consumers. Add to this corporate consolidation the spread of biotech crops and you see why biodiversity is becoming so threatened. Biotech crops, like other industrial crops, are monocultures, with single varieties planted over millions of acres and sprayed with chemicals. Despite promises about wonder crops that would end Vitamin A deficiency or withstand drought, nearly all commercially available genetically modified foods are just one of two types, designed either to withstand a specific pesticide or to include a built-in pesticide. Fifty percent of all biotech crops planted worldwide are soybeans. Three countries--the United States, Brazil, and Argentina--grow 77 percent of all genetically modified crops, nearly all destined for livestock, not the world's hungry. Biotech crops also affect biodiversity in ways that "traditional" industrial crops don't: by risking the genetic integrity of cultivated and wild plants. In a 2006 report, Doug Gurian-Sherman, now with the Union of Concerned Scientists, explained: "Genetic engineering ups the ante when it comes to the potential for harm to wildlife from gene flow, because organisms in natural ecosystems have not adapted to many of the genes used in field trials." With the recent approval of genetically engineered alfalfa in the United States, organic farmers here are ever more concerned about such a "genetic trespass." Among biodiversity's many benefits is that it provides a reservoir of potentially essential genetic material, varieties that might be found to be more resilient in the face of more droughts and floods, for instance. Says Jack Heinemann, a professor of molecular biology at New Zealand's University of Canterbury, "If we jeopardize this biodiversity for the sake of a possible wonder trait for tomorrow, then we won't have any wonder traits for the day after tomorrow." That's not what the biotech industry is saying.
no
Biotechnology
Can Genetically Modified Crops Promote Biodiversity?
yes_statement
"genetically" modified crops can promote biodiversity. biodiversity can be promoted by "genetically" modified crops
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5250645/
Herbicide resistance and biodiversity: agronomic and environmental ...
Abstract

Farmland biodiversity is an important characteristic when assessing sustainability of agricultural practices and is of major international concern. Scientific data indicate that agricultural intensification and pesticide use are among the main drivers of biodiversity loss. The analysed data and experiences do not support statements that herbicide-resistant crops provide consistently better yields than conventional crops or reduce herbicide amounts. They rather show that the adoption of herbicide-resistant crops impacts agronomy, agricultural practice, and weed management and contributes to biodiversity loss in several ways: (i) many studies show that glyphosate-based herbicides, which were commonly regarded as less harmful, are toxic to a range of aquatic organisms and adversely affect the soil and intestinal microflora and plant disease resistance; the increased use of 2,4-D or dicamba, linked to new herbicide-resistant crops, causes special concerns. (ii) The adoption of herbicide-resistant crops has reduced crop rotation and favoured weed management that is solely based on the use of herbicides. (iii) Continuous herbicide resistance cropping and the intensive use of glyphosate over the last 20 years have led to the appearance of at least 34 glyphosate-resistant weed species worldwide.
Although recommended for many years, farmers did not counter resistance development in weeds by integrated weed management, but continued to rely on herbicides as the sole measure. Despite occurrence of widespread resistance in weeds to other herbicides, industry instead develops transgenic crops with additional herbicide resistance genes. (iv) Agricultural management based on broad-spectrum herbicides as in herbicide-resistant crops further decreases diversity and abundance of wild plants and impacts arthropod fauna and other farmland animals. Taken together, adverse impacts of herbicide-resistant crops on biodiversity, when widely adopted, should be expected and are indeed very hard to avoid. For that reason, and in order to comply with international agreements to protect and enhance biodiversity, agriculture needs to focus on practices that are more environmentally friendly, including an overall reduction in pesticide use. (Pesticides are used for agricultural as well as non-agricultural purposes. Most commonly they are used as plant protection products, and the term is used in that sense throughout this text.)

Electronic supplementary material

The online version of this article (doi:10.1186/s12302-016-0100-y) contains supplementary material, which is available to authorized users.

Preliminary remark

Together with the supplement, the present paper is a summary and an update of a comprehensive technical report which was previously published by the German Federal Agency for Nature Conservation BfN, the Austrian Environment Agency EAA, and the Swiss Federal Office for the Environment FOEN [1]. Based on this technical report (see Additional file 1), some members of the Interest Group GMO within the EPA and ENCA networks drafted a position paper which highlights key messages regarding the environmental impacts of the cultivation of genetically modified herbicide-resistant plants [2, 3].
Acting upon the key messages should improve the current environmental risk assessment of these plants. The position paper was recently addressed to relevant EU bodies with the aim to ensure adequate protection of the environment in the future. Most of the members of the IG GMO within the EPA and ENCA networks are involved in the risk assessment of GMOs in the EU and other European countries. Hence, the group consists of agencies responsible for the authorization of GMO releases as well as public institutions that provide scientific support to national administrations, e.g. as regards risk assessment. This paper summarizes the lessons learned from the experience with the use of GM plants resistant to the herbicides glyphosate and glufosinate. It is based on a more detailed paper that can be accessed as a supplement to this article. Ongoing discussions about the food and feed safety of GM crops and the concept of substantial equivalence are not in the realm of this paper. Throughout this document, the terms “herbicide resistance” and “herbicide tolerance” are used as defined by the Weed Science Society of America [4]; both terms are not used synonymously with respect to a particular response to a herbicide; they rather distinguish naturally occurring “tolerance” from engineered “resistance”.

Review

Agreements and regulations covering biodiversity protection

Conservation of biodiversity is high on the agenda of international and national environmental policies though not very present in public awareness. The need to protect biodiversity and stop the loss was acknowledged in the Convention on Biological Diversity (CBD), internationally agreed on in 1992, and underscored by relevant decisions since then (the Convention entered into force in 1993).
The Cartagena Protocol on Biosafety (CPB), adopted by the Parties to the CBD in 2000 and entering into force in 2003, seeks to protect biological diversity from potential risks posed by living modified organisms (LMOs), especially focusing on transboundary movement. Moreover, the CPB aims to facilitate information exchange on LMOs and procedures to ensure that countries can make informed decisions before they agree to import LMOs. Currently, 195 nations plus the EU are Parties to the CBD and 169 plus the EU to the Cartagena Protocol. In the EU, the deliberate release into the environment of genetically modified organisms (GMOs) is regulated by the Directive 2001/18/EC and the Directive (EU) 2015/412. Referring to the precautionary principle, the Directive 2001/18/EC aims at the protection of human and animal health and the environment. In the course of the environmental risk assessment, intended and unintended as well as cumulative long-term effects relevant to the release and the placing on the market of GMOs have to be considered comprehensively. Most commercially planted genetically modified (GM) crops are either herbicide-resistant (HR) or insect-resistant (IR), many carrying both traits. Based on recent data and experience, there are concerns that HR crops promote the further intensification of farming and may therefore increase pressure on biodiversity.

Herbicide-resistant crops

Herbicide resistance is the predominant trait of cultivated GM crops and will remain so in the near future. GM crops resistant to the broad-spectrum herbicides glyphosate and glufosinate were first cultivated commercially in the 1990s [5], and GM crops with resistance to other herbicides are under development [6], or already on the market, with various HR traits increasingly combined in one crop [7]. Another, more recent strategy is the development of plants that are resistant to high concentrations of glyphosate without exhibiting a yield drag [8, 9].
Glyphosate inhibits 5-enolpyruvylshikimate-3-phosphate synthase (EPSPS), an enzyme of the shikimate pathway for biosynthesis of aromatic amino acids and phenolics in plants and microorganisms. This enzyme is not present in human or animal cells [10]. Glufosinate ammonium is an equimolar, racemic mixture of the d- and l-isomers of phosphinothricin (PPT). The l-isomer inhibits plant glutamine synthetase, leading to the accumulation of lethal levels of ammonia [11]. To confer resistance to glyphosate, most glyphosate-resistant crops express a glyphosate-insensitive EPSPS derived from Agrobacterium spp., some also the glyphosate-degrading enzyme glyphosate oxidoreductase (GOX) and/or the enzyme glyphosate acetyltransferase (GAT), which modifies glyphosate. In addition, various crops have also been transformed with one of the two bacterial genes pat or bar from Streptomyces spp. conferring resistance to glufosinate-based herbicides. These genes encode the enzyme phosphinothricin acetyl transferase (PAT), which detoxifies l-PPT. Other transgenes contained in HR crops confer resistance to ALS inhibitors (gm-hra gene), 2,4-D (aad-1 and aad-12 genes) or to dicamba (dmo gene). While many transgenic HR crop species have been tested in the field, only four have been widely grown commercially since the late 1990s: soybean, maize, cotton, and canola [12]. In 2013, of the 175.2 million ha global GM crop area, about 57% (99.4 million ha) were planted with HR varieties and another 27% (47 million ha) with stacked HR/IR crops [13]. Hence, 84% of the GM crops carried HR genes (146.4 million ha). HR soybean is the dominant GM crop and grown mainly in North and South America, making up about 80% of the global soybean area and 46% of the total GM crop area [12]. In GM maize and GM cotton, HR traits are often combined with IR genes. In the US, HR crops such as alfalfa, sugar beet, creeping bentgrass, and rice are already deregulated and on the market or pending deregulation [7].
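The area shares quoted above can be cross-checked arithmetically. The short sketch below is illustrative only; all input figures are taken from the cited 2013 statistics [13], and the script merely verifies that the stated percentages and the 146.4 million ha total are mutually consistent.

```python
# Cross-check of the 2013 GM crop area shares quoted in the text.
# All figures are from the cited statistics [13]; this only verifies
# that the stated percentages are mutually consistent.
total_gm_area = 175.2    # million ha, global GM crop area in 2013
hr_only = 99.4           # million ha planted with HR-only varieties
stacked_hr_ir = 47.0     # million ha with stacked HR/IR traits

hr_total = hr_only + stacked_hr_ir
print(f"HR-only share:  {hr_only / total_gm_area:.0%}")        # ~57%
print(f"Stacked share:  {stacked_hr_ir / total_gm_area:.0%}")  # ~27%
print(f"Total HR area:  {hr_total:.1f} million ha")            # 146.4
print(f"Total HR share: {hr_total / total_gm_area:.0%}")       # ~84%
```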
Yields of HR crops

Contrary to widespread assumptions, HR crops do not provide consistently better yields than conventional crops. Increased yield is not the main reason for farmers to adopt HR crops. If there are yield differences between HR and conventional crops, they may be due to various factors, such as scale and region of growing, site and size of farms, soil, climate, tillage system, weed abundance, genetic background/varieties, crop management, weed control practice, farmer skills, and the education of the farm operators. Reviewing data about the agronomic performance of GM crops, Areal et al. [14] concluded that although GM crops, in general, perform better than conventional counterparts in agronomic and economic (gross margin) terms, results on the yield performance of HR crops vary. A consistent yield advantage for HR crops over conventional systems could not be demonstrated [15–17]. The yield reduction in RoundupReady soybean observed in some studies [15] might be due to several causes: (i) the resistance gene present in the first-generation RoundupReady line (40-3-2) [18], (ii) reduced nodular nitrogen fixation upon glyphosate application [19], and/or (iii) a weaker defence response [20]. Application of glyphosate seemed to affect nodule number and mass, which correlate with nitrogen fixation [21], and to cause the symptom of "yellow flashing", which leads to a decrease in grain yield (see discussion in [9]). The second-generation RR2Y soybean (MON 89788) was introduced to provide better yields, but when tested in the greenhouse, different cultivars of RR2Y performed less well than RR 40-3-2 [22].

Eco-toxicological attributes of complementary herbicides

Impacts of HR crops on biodiversity are possible through the altered herbicide management option, that is, application of a broad-spectrum herbicide during crop growth and its impacts on weed abundance and diversity.
These impacts, also called indirect effects, are dealt with later in this text. Direct impacts relate to the toxicity of the herbicide, its residues, and breakdown products. First, an update of the eco-toxicological attributes and direct effects of the relevant complementary herbicides of HR crops is given.

Glyphosate

Glyphosate (C3H8NO5P; N-(phosphonomethyl)glycine), a polar, water-soluble organic acid, is a potent chelator that easily binds divalent cations (e.g. Ca, Mg, Mn, and Fe) and forms stable complexes [23]. In addition to the active ingredient (a.i.), which can be present in various concentrations, herbicides usually contain adjuvants or surfactants that facilitate penetration of the active ingredient through the waxy surfaces of the treated plants. The best-known glyphosate-containing herbicides, the Roundup product line, often contain as a surfactant polyethoxylated tallow amine (POEA), a complex mixture of di-ethoxylates of tallow amines characterized by their oxide/tallow amine ratio, which is significantly more toxic than glyphosate itself [24]. The toxicity of formulations to human cells varies considerably, depending on the concentration (and homologue) of POEA [25]. Data from toxicity studies performed with glyphosate alone and over short periods of time may thus conceal adverse effects of the formulated herbicides. Glyphosate degradation is reported to be rapid (half-lives up to 130 days) [3], but its main metabolite aminomethylphosphonic acid (AMPA) degrades more slowly. Both substances are frequently and widely found in US soils, surface water, groundwater, and precipitation [26]. Recently, the widespread occurrence of POEA and the persistence of POEA homologues in US agricultural soils have been reported [27], with currently unknown and unexplored consequences. Inhibition of the enzyme EPSPS and disruption of the shikimate pathway impact protein synthesis and the production of phenolics, including defence molecules, lignin derivatives, and salicylic acid [28].
Glyphosate impacts plant uptake and transport of micronutrients (e.g. Mn, Fe, Cu, and Zn), whose undersupply can reduce disease resistance and plant growth [20, 23]. In Argentine soils, residue levels of up to 1500 µg/kg (1.5 ppm) glyphosate and 2250 µg/kg (2.25 ppm) AMPA have been found [29]. Glyphosate affects the composition of the microflora in soil and gastrointestinal tracts differentially, suppressing some microorganisms and favouring others [30, 31]. This is likely linked to the varying sensitivities of bacterial EPSPS enzymes to glyphosate [32]. In the RoundupReady soybean system, bacterial-dependent nitrogen fixation and/or assimilation can be reduced [33]. Impacts of glyphosate on fungi also vary, depending on study sites, species, pathogen inoculum, timing of herbicide application, soil properties, and tillage [28]. Mycorrhizal fungi seem to be sensitive to glyphosate [34], while other fungi, including pathogenic Fusarium species, may be favoured under certain conditions, since glyphosate may serve as a nutrient and energy source [30]. The microbial community of the gastrointestinal tract of animals and humans may be severely affected if, as reported by Shehata et al. for poultry microbiota in vitro [31], pathogenic bacteria (e.g. Salmonella and Clostridium) are less sensitive to glyphosate than beneficial bacteria, e.g. lactic acid bacteria. For this reason, studies on glyphosate effects on the gut microbiome of other species are needed. Glyphosate-based herbicides can affect aquatic microorganisms both negatively (e.g. total phytoplankton and the nitrifying community) and positively (e.g. cyanobacteria) [35, 36], with surfactants such as POEA being significantly more toxic than the active ingredient itself [37]. In studies where Daphnia magna were fed glyphosate residues over the whole life cycle, growth, reproductive maturity, and offspring number were impaired [38].
Amphibians are particularly at risk, since shallow temporary ponds are areas where pollutants can accumulate without substantial dilution. Sublethal concentrations of glyphosate herbicides can cause teratogenic effects and developmental failures in amphibians and impact both larval and adult stages [39]. Environmentally relevant levels of exposure to both glyphosate and Roundup have led to major changes in the liver transcriptome of brown trout, reflective of oxidative stress and a cellular stress response [40]. Simultaneous exposure to glyphosate-based herbicides and other stressors can induce or increase adverse impacts on fish [41] and amphibians [42]. Glyphosate application reduced the number and mass of casts and the reproductive success of earthworm species that inhabit agroecosystems [43]. Impacts on arthropods, among them beneficial land predators and parasites, vary [44]. Exposure to sublethal glyphosate doses impairs the behaviour and cognitive capacities of honey bees [45]. The acute toxicity of glyphosate to mammals is low relative to that of other herbicides. In recent years, however, glyphosate-based herbicides have been reported to be toxic to human and rat cells, to impact chromosomes and organelle membranes, to act as endocrine disruptors, and to lead to significant changes in the transcriptome of rat liver and kidney cells [25, 46, 47]. Negative effects of glyphosate on embryonic development after injection into Xenopus laevis and chicken embryos have been linked to interference of glyphosate with retinoic acid signalling, which plays an important role in gene regulation during early vertebrate development, also showing that damage can occur at very low levels of exposure [48]. The International Agency for Research on Cancer (IARC) concluded in a recent report that glyphosate is probably carcinogenic to humans [49].
When mandated by the European Commission to consider IARC's conclusion, EFSA identified some data gaps, but argued that, based on its own calculations of the glyphosate doses humans may be exposed to, glyphosate is unlikely to pose a carcinogenic hazard to humans [50]. The current concerns over the use of glyphosate-based herbicides are summarized in a recent paper [51], which concludes that glyphosate-based herbicides should be prioritized for further toxicological evaluation and for biomonitoring studies.

Glufosinate ammonium

As noted above, the l-isomer of phosphinothricin (l-PPT) inhibits glutamine synthetase of susceptible plants, resulting in the accumulation of lethal levels of ammonia [11]. Fewer data on the eco-toxicity of glufosinate are available compared to glyphosate, presumably due to the significantly lower use of glufosinate. The formulated product is known to be (slightly) toxic to fish and aquatic invertebrates. Glufosinate has been shown to suppress some soil microorganisms, whereas others exhibited tolerance [52]. Some fungal pathogens seem to be reduced by glufosinate, potentially due to inhibition of glutamine synthetase, similar to the inhibition in plants [53]. Glufosinate may impact predatory insects, mites, and butterflies [54, 55]. Glufosinate ammonium has the potential to induce severe reproductive and developmental toxicity in rats and rabbits [56]. Because of its reproductive toxicity, use of glufosinate will be phased out in the EU by September 2017 [57]. In other countries, however, glufosinate use may not be discontinued, as glufosinate-resistant crops are increasingly grown in reaction to the ever greater number of glyphosate-resistant weeds [7, 58].

Other herbicides

The increasing use of "old" herbicides such as synthetic auxins, expected in the course of US deregulation of crops resistant to 2,4-D or dicamba, raises serious concerns. Synthetic analogues of the plant hormone auxin cause uncontrolled and disorganized plant growth, finally killing sensitive plants, e.g.
broadleaf weeds. The herbicide 2,4-D is 75 times, and dicamba 400 times, more toxic to broadleaf plants than glyphosate [59]. Both herbicides are highly volatile, thus increasing the potential for damage to non-target organisms due to spray drift. Sensitive crops, vegetables, ornamentals, and plants in home gardens could be damaged, and both plant and arthropod communities in field edges and semi-natural habitats affected [60]. Whether a new formulation with lower volatility to be used in resistant crops, e.g. Enlist Duo comprising 2,4-D and glyphosate, and special stewardship guidelines will help reduce adverse herbicide effects is highly questionable [59], since lower volatility of a substance may reduce exposure, but not toxicity, and stewardship programs address resistance issues in the target organisms, not toxicity issues. The herbicides 2,4-D and 2,4,5-T (2,4,5-trichlorophenoxyacetic acid) each accounted for about 50% of Agent Orange, the herbicide product sprayed by the US military in the jungle in Vietnam. Agent Orange contained highly toxic impurities, including dioxins and furans. Such impurities in current 2,4-D-containing herbicides are still a concern, especially in herbicides manufactured outside the EU and US [61]. Recently, IARC [62] classified 2,4-D as a "possible human carcinogen," a classification which is not shared by EFSA [63]. Due to potential synergistic effects between the two ingredients in Enlist Duo on non-target plants, the US Environmental Protection Agency has considered taking legal action to revoke the registration of this herbicide mix [64].

Impacts on agricultural practice and agronomy

HR crops can have various impacts on agricultural practice and agronomy, including weed control, soil tillage, planting, crop rotation, yield, and net income.
These interdependent factors influence the degree to which, and the circumstances under which, HR crops are adopted, and should be taken into account when impacts of HR crops on biodiversity are considered comprehensively. Resistance to the broad-spectrum herbicides glyphosate and glufosinate allows previously sensitive crops to survive their application, facilitating weed control and giving the farmer more flexibility, e.g. by extending the time window for spraying and allowing post-emergence application. Conservation tillage, often recommended to reduce soil erosion and to save costs and energy, has increased and might expand even further if more HR crops are grown, as they are well adapted to low-tillage systems. From 1996 to 2008, adoption of conservation tillage in US soybean cultivation increased significantly [58]. In the US, the most often stated reasons for the adoption of HR crops were improved and simplified weed control, lower labour and fuel costs, no-till planting/planting flexibility, yield increase, an extended time window for spraying, and in some cases decreased pesticide input [65]. Labour reduction may allow farmers to generate off-farm income [66]. In the beginning, weed resistance management did not seem particularly important to farmers, although weeds had become resistant to commonly used selective herbicides before [6]. Farmers were likely guided by the industry's argument that, for a couple of reasons, among them glyphosate's unique properties, glyphosate-resistant weeds would not evolve, at least not very rapidly [67]. Reasons for the adoption of HR crops in South America were similar to those mentioned above [68]. Moreover, the lack of patent protection of GM seeds facilitated the introduction of HR soybean in Argentina, as seeds could be saved for planting and resale, and could also enter the black market, from where they were smuggled into Brazil [69].
Crop rotation helps maintain high productivity by reducing pesticide use and fertilizer input and can reduce pest and pathogen incidences, weed infestation, and selection pressure for weed resistance to herbicides [58]. However, in regions where HR crops are widely adopted, there is a clear trend toward monoculture, and crop rotation and diversification are reduced [59]. In the US, over very large areas, crop rotation comprises only glyphosate-resistant crops, the most common rotation being HR soybean to HR corn [66]. In Argentina, within a few years, continuous HR soybean replaced 4.6 million ha of land initially dedicated to other crops, leading to a noticeable homogenization of production and landscapes [68].

Weed control patterns and herbicide use

HR crops are advertised as being environmentally friendly due to lower herbicide use compared to conventional crops. However, actual trends suggest the opposite. Changes in the overall amount of herbicides used are difficult to assess, since different herbicides are applied at different rates. Nevertheless, reports show that with the introduction of HR crops in the US in 1996, lower amounts of herbicides were applied during the first years, with glyphosate replacing other herbicides [70]. Since then, however, overall herbicide use in HR crops has increased: from 1998 to 2013, the increase in amounts (kg/ha) of active ingredient (a.i.) in HR soybean was 64%, compared to 19% in conventional soybean [71]. The cultivation of HR soybean, maize, and cotton increased herbicide use in the US by an estimated 239 million kg in 1996–2011, compared to non-HR crops, with HR soybean accounting for 70% of the total increase [72]. Global glyphosate use increased too. While from 1995 to 2014, US agricultural use of glyphosate rose ninefold to 113.4 million kg, global agricultural use rose almost 15-fold to 747 million kg, with more than 50% accounted for by use on HR crops [73].
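The glyphosate-use figures above imply starting values and growth rates that are easy to back-calculate. The sketch below derives them from the fold-changes reported for 1995–2014 [73]; note that the 1995 baselines are inferred from the stated fold-changes, not independently reported figures.

```python
# Baselines and average annual growth of agricultural glyphosate use,
# back-calculated from the fold-changes reported for 1995-2014 in [73].
# The 1995 values are inferred, not independently reported figures.
us_2014, us_fold = 113.4, 9            # million kg a.i., US agricultural use
global_2014, global_fold = 747.0, 15   # million kg a.i., global agricultural use
years = 2014 - 1995                    # 19-year interval

us_1995 = us_2014 / us_fold            # ~12.6 million kg
global_1995 = global_2014 / global_fold  # ~49.8 million kg

def cagr(fold, years):
    """Compound annual growth rate implied by an n-fold rise over `years` years."""
    return fold ** (1 / years) - 1

print(f"US:     {us_1995:.1f} -> {us_2014} million kg (~{cagr(us_fold, years):.1%}/yr)")
print(f"Global: {global_1995:.1f} -> {global_2014} million kg (~{cagr(global_fold, years):.1%}/yr)")
```

Even the "almost 15-fold" global rise thus corresponds to sustained growth of roughly 15% per year over two decades, consistent with the adoption trends described in the text.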
In Argentina, glyphosate use more than doubled from 2000 to 2011, due to the steady increase of the cultivation area of RoundupReady soybeans [74]. If HR crops were grown in Europe, herbicide use is estimated to rise significantly: by 25% if the introduction of HR crops were accompanied by resistance management, and by 72% if their use were as unrestricted as in the US [75]. In addition, increased weed resistance to glyphosate leads to changes in the mix, total amount, cost, and overall environmental profile of the herbicides applied to HR crops [6, 71]. In 2013, almost two-thirds of RoundupReady soybean crops received an additional herbicide treatment, compared to 14% in 2006 [71]; for example, the use of 2,4-D increased by almost 40% from 2002 to 2011 [58]. With the introduction of additional HR traits, "old" herbicides such as 2,4-D, dicamba, and ACCase and ALS inhibitors are used more frequently again. After deregulation in the USA of 2,4-D-resistant GM soybean and corn, the amounts of 2,4-D applied in the US could triple by 2020 compared to 2011, with glyphosate use remaining stable [58]. Use of 2,4-D on corn could increase over 30-fold from 2010 levels [72].

Changes in weed susceptibility

Both non-selective herbicides, glyphosate and glufosinate, are effective on a wide range of annual grass and broadleaf weed species. The simplicity and effectiveness of weed control in HR cropping systems can be undermined in several ways: (i) by shifts in weed communities and populations resulting from the selection pressure of the applied herbicides, (ii) by escape and proliferation of transgenic plants as weedy volunteers, and (iii) by hybridization with related weedy species and introgression of HR genes into them. While point (i) indicates changes in biodiversity, points (ii) and (iii) could increase the overall herbicide use in chemical weed management and thereby affect biodiversity further.
Selection of resistance and weed shifts

In general, increased reliance on herbicides for weed control leads to a shift in weed species composition. Less sensitive species and populations survive herbicide sprayings and subsequently grow and spread, whereas more sensitive species disappear. By early 2016, a total of 249 weed species (with 464 biotypes) resistant to various herbicides had been recorded, occupying hundreds of thousands of fields worldwide. Many of these biotypes are resistant to more than one herbicide mode of action [76]. Resistance genes can spread by hybridization between related weed species [77] and possibly accumulate in certain biotypes. Although glyphosate (and glufosinate) have long been considered low-risk herbicides with regard to the evolution of resistance [78], at least 34 glyphosate-resistant weed species (more than 240 populations) have been confirmed to date, observed on millions of hectares and increasingly associated with HR crop cultivation [76]. Many of them express resistance to other herbicide classes, too. In the US, the true area infested likely exceeds 28 million ha [79] by a sizable margin. In particular, glyphosate-resistant Palmer amaranth (Amaranthus palmeri) creates control problems and poses a major economic threat to US cotton production [58]. Recently, two weed species resistant to glufosinate have been described, among them one population resistant also to glyphosate [76]. The molecular and genetic mechanisms of resistance to glyphosate are very diverse and can co-occur [77, 80]. Mutations in the EPSPS target site [81], increased EPSPS mRNA levels [82], EPSPS gene amplification [83], delayed glyphosate translocation [84], sequestration of glyphosate in vacuoles [85], and degradation in the plant [86] have been described. The increased glyphosate use has also promoted species shifts among the weed flora, and several grass and broadleaf weeds are becoming problematic [87].
Resistance management

At the beginning of HR crop cultivation, resistance management was not considered to be an issue [67, 88], but this has since changed [89, 90]. For more than a decade now, weed scientists have been recommending that farmers implement an integrated weed management approach consisting of "many little hammers". These "hammers" include crop and herbicide rotation, mechanical weeding, cover crops, intercropping, and mulching [91, 92]. But continuous HR cropping became common in the Americas, and farmers often simply resorted to higher glyphosate doses, additional applications (often both), and the combined use of other herbicides [93]. Paraquat and synthetic auxins are recommended in tank mixtures or in rotation with glyphosate, but resistance to these herbicides is about as common as resistance to glyphosate [76]. New herbicides will not be commercialized in the near future, due to increased development costs and the challenge of finding suitable substances that comply with the stricter regulatory standards for weed efficacy and environmental and toxicological safety [6]. In this context, it is noted that companies increasingly develop and commercialize GM crops that resist higher glyphosate doses or that contain stacked HR traits, such as resistance to glyphosate and/or glufosinate, in part combined with resistance to 2,4-D, dicamba, ACCase inhibitors, or HPPD inhibitors [6, 7, 9]. But as resistance to these herbicides is already common [76], stacking of HR traits and increased use of herbicides other than glyphosate will not reduce the selection pressure on weeds or decrease the overall herbicide amounts applied. In addition, merely rotating herbicides may exacerbate resistance problems by selecting for broader resistance mechanisms in weeds [94]. Against this background, integrated weed management is strongly recommended and seems to be the only sensible strategy in the long term.
Cropping systems that employ such an approach are competitive, with regard to yields and profits, with systems that rely chiefly on herbicides [59]. A four-year crop rotation scheme (maize–soybean–small grain + alfalfa–alfalfa) not only helped reduce herbicide applications and fertilizer input, but also provided similar or even better yields and economic output compared to the two-year maize–soybean rotation common in the US [95]. However, although tools for weed control other than herbicides are clearly needed, the use of herbicides is still the main weed management method, and the number of papers dealing with chemical control eclipses the number on any other method [96].

Seed escape and proliferation of HR plants

Seed escape and proliferation of HR plants can create severe management problems, especially with persistent crops. Volunteers, that is, crop plants in the field emerging from the previous crop, create problems when the following crop is a different species or a different variety of the same species. Volunteer management will become more complex if both volunteer plants and crops are resistant to the same herbicide. Crops with characteristics such as shattering and seed persistence are particularly likely to emerge as volunteers. Oilseed rape readily produces volunteers and feral plants, due to its high seed production, high seed losses during harvest and transport, and its secondary dormancy [97]. HR oilseed rape plants have been found up to 15 years after experimental releases, despite regular control of the fields for volunteers [98, 99]. The recently reported contamination of oilseed rape seed by the non-approved OXY-235 variety (resistant to oxynil herbicides) in the EU might be traced back to field trials in France in the nineties [100], indicating that volunteers may emerge even after almost 20 years.
Seed spill can also occur outside the fields and along transport routes, potentially leading to HR feral plants that may persist over large spatial and temporal scales [101]. HR feral oilseed rape plants have been found along transport routes in the US [102], in Switzerland [103], and in Japan [104], in regions where the crop had never been grown.

HR-gene flow to volunteers, neighbouring crops or interfertile weeds

Gene flow from HR crops is a special aspect of agrobiodiversity and relevant for the purity of genetic resources. The frequency of outcrossing depends on the crop species in question and its pollination system, the distance to simultaneously flowering volunteers or relatives, and variables such as genotype, abundance and foraging behaviour of pollinators, weather conditions, time of day, and the size of the pollen donor and receiving populations. Novel combinations of transgenic events can be formed in the wild [102]. Reviews on gene flow have focused on the main GM crops [105] or on single crop species such as oilseed rape [106], maize [107], rice [108], sugar beet [109], and soybean [110]. As large pollen sources, such as crop fields, interact on a regional scale and tend to increase gene flow, isolation distances have to be adjusted to this factor [111]. In centres of crop origin and in regions where interfertile weeds that can hybridize with crops are present, gene flow from crop to weeds should be taken into account. This is true for oilseed rape (Brassica napus) and its close relative field mustard (Brassica rapa) in many regions of Europe [106]. Once herbicide resistance genes move into weeds, their frequency within local weed populations will increase under selection pressure by the corresponding herbicide. Hybrids do not need to be particularly fit as long as they are able to backcross with the weedy relative, a capacity which is characteristic of many interspecific hybrids.
Even genotypes with lower fitness may survive if the pollen flow is steady and the pollen source is large [112]. In some European regulatory frameworks, e.g. the Swiss biosafety regulations, undesired gene flow is in itself considered an adverse effect.

Agriculture and biodiversity

Intensive high-input farming is a major force driving biodiversity loss and other environmental impacts beyond the "planetary boundaries" [113, 114]. Drivers include the low number of cropped species, reduced rotation, limited seed exchange between farms, drainage, landscape consolidation, and increased use of pesticides. At the same time, agriculture relies on ecosystem functions and services and on biodiversity, including pollination, biological pest control, maintenance of soil structure and fertility, nutrient cycling, and hydrological services [115]. Weeds are part of the biodiversity of the agroecosystem. Although commonly regarded as pests, they offer considerable benefits to the agroecosystem by supporting a range of organisms such as decomposers, predators, pollinators, and parasitoids. They fulfil certain functions within the agroecosystem, which become obvious when they are missing: e.g. decreasing the antagonists of pest species can increase pesticide inputs, as demonstrated by exclusion experiments [116, 117], and lower numbers of pollinators may reduce yield and quality in crops depending on animal pollination [118]. Within the last decades, the diversity of the "associated agricultural flora" (a neutral expression for weeds) and the seed bank in arable soils have been reduced significantly [119, 120]. If the associated flora and arthropods are decreased in terms of abundance and diversity, this will affect the whole food chain, including small mammals and farmland birds, the latter being major targets and important indicators of agricultural change [121].
Organic farming, by contrast, has a large positive effect on biodiversity, with plants benefiting the most among taxonomic groups [122].

Indirect effects of HR agriculture on biodiversity

As outlined above, broad-spectrum herbicides directly affect various organisms. However, as part of the HR weed management system, they also affect biodiversity as a whole. As glyphosate and glufosinate are effective on more weed species than other currently used herbicides or mechanical weeding, and on more than is necessary for crop protection and productivity, they will increase the level of weed suppression. Therefore, HR crops will likely support monocultures and the excessive control of weeds in agricultural environments. Indications of increased loss of biodiversity were found in the three-year Farm Scale Evaluations (FSE), in which the effects of HR cropping systems on the abundance and species diversity of wild plants and arthropods were investigated across Britain [123, 124]. In glyphosate-resistant sugar beet and fodder beet and in glufosinate-resistant oilseed rape, wild plant density, biomass, seed rain, and the seed bank were lower by one-third to one-sixth than in the conventional counterparts; fewer species emerged as well, compared to conventional management [125–127]. On the other hand, glufosinate-resistant maize showed a more diverse weed flora compared to conventional maize sprayed with atrazine. However, atrazine is highly effective on a broad range of plants and no longer approved in the EU. Herbicide drift to field margins is a concern for nature conservation and the biodiversity of agricultural landscapes, as field margins and hedgerows often harbour rare plant species [128]. These habitats, too, were negatively affected in the FSE trials [129]. In the FSE trials, the abundance of arthropods changed in the same direction as their resources, and herbivores, pollinators, and beneficial natural enemies of pests were reduced [130].
The FSE findings are supported by the results of a one-year canola field study in Canada, where wild bee abundance was highest in organic fields, followed by conventional fields, and lowest in HR crops [131]. This might also impact vertebrates: if weed abundance and spectra are diminished, birds [132] and migrating adult amphibians [39] may have difficulties finding enough seeds or invertebrates for food. A prominent large-scale example of indirect effects of HR crops on biodiversity is the case of the monarch butterfly. Recent US data indicate that, within the last decade and in parallel to the widespread and increased adoption of HR crops, the population size of the migratory monarch butterfly (Danaus plexippus) has declined significantly, due, at least in part, to the widespread loss of milkweed (Asclepias syriaca) in the Midwest [133–135]. Milkweed is the main food plant of monarch larvae, and the Midwest is the main breeding ground for monarchs. Should HR maize and HR oilseed rape be widely grown in Europe, a similar scenario has been predicted for the European butterfly Queen of Spain fritillary (Issoria lathonia) [136].

Aspects of sustainable agriculture

The overreliance of HR cropping systems on chemical weed control discourages the use and retention of existing alternative weed management skills. In addition, HR cropping systems are not compatible with mixed cropping systems [137]. Diversification practices, however, such as cover crops, mixed cropping, intercropping, and agroforestry, help retain soil and soil moisture better than intensive cropping, improve resilience to climate disasters, and thus support the structures of the agroecosystem which provide ecosystem services. Small multifunctional and ecologically managed farms are more productive than large farms if total output, including energy input/output, is considered rather than single-crop yield.
However, human labour cannot be fully substituted by mechanization in such farming approaches [138, 139]. Davis et al. [95] showed in a nine-year field study in the US corn belt that more diverse rotations including forage legumes enhanced yields of corn and soybean grain by up to 9% and significantly reduced fertilizer application, energy use, and herbicide input. Weed control and profitability remained the same, whereas labour demand was higher. As pointed out by the International Assessment of Agricultural Knowledge, Science and Technology for Development (IAASTD) [140], agriculture is multifunctional and serves diverse needs. But for many years, agricultural science and development have focused on delivering technologies to increase farm-level productivity rather than integrating externalities such as impacts on biodiversity and the relationship between agriculture and climate change. In view of the current challenges, the IAASTD concludes that business as usual is not an option. Rather, increased attention needs to be directed toward new and successful existing approaches to maintain and restore soil fertility and to maintain a truly sustainable agricultural production. From the data collected and assessed, HR cropping systems seem to be no option for a sustainable agriculture that also focuses on the protection of biodiversity. On the contrary, HR crops rather seem to be part of the problem.

Conclusions

Intensive high-input farming is known as one of the main drivers of the continuous biodiversity loss in agricultural landscapes. Diversity and abundance of the weed flora provide relevant indicators of farmland biodiversity. While HR cropping facilitates weed control for farmers and makes chemical weed management more flexible, it is accompanied by increased herbicide use and less crop rotation. Toxic effects of the complementary herbicides on non-target organisms, e.g. soil and aquatic organisms, have been shown.
Due to the widespread use of glyphosate, at least 34 glyphosate-resistant weed species have evolved worldwide. To counter resistance evolution in weeds, integrated weed management is recommended, but continuous and widespread HR cropping is still very common. The commercial trend is to develop new GM crops with stacked HR traits and GM varieties with increased glyphosate resistance. However, this approach will not reduce the overall herbicide amounts used in agriculture. Control problems can also arise from HR volunteers or feral plants, e.g. HR oilseed rape. In centres of crop origin and in regions where sexually compatible plants occur, transfer of HR genes to wild relatives can be expected. Biodiversity will be affected by HR cropping systems through the very efficient removal of weed plants, which in turn leads to a further reduction of flora and fauna diversity and abundance. A prominent example in this respect may be the decline of monarch butterfly populations in the US, which has been linked to the massive loss of their food plants upon widespread adoption of HR crops. Since it has been shown that HR systems are not compatible with measures to stop the loss of biodiversity on farmland, a more sustainable model of agriculture is needed, which, according to the present experience, cannot reasonably integrate approaches like HR cropping.

Authors' contributions

GS and MM drafted the manuscript. All authors read and approved the final manuscript.

Acknowledgements

None.

Competing interests

The authors declare that they have no competing interests. The drafting of the manuscript was financially supported by FOEN.
Abbreviations

2,4-D: 2,4-dichlorophenoxyacetic acid
ACCase: acetyl CoA carboxylase
ALS: acetolactate synthase
AMPA: aminomethylphosphonic acid
CBD: Convention on Biological Diversity
CPB: Cartagena Protocol on Biosafety
EPSPS: 5-enolpyruvylshikimate-3-phosphate synthase
FSE: Farm Scale Evaluations
GAT: glyphosate acetyltransferase
GMOs: genetically modified organisms
GM: genetically modified
GOX: glyphosate oxidoreductase
HPPD: hydroxyphenylpyruvate dioxygenase
HR: herbicide-resistant or herbicide resistance
IAASTD: International Assessment of Agricultural Knowledge, Science and Technology for Development

Footnotes

1. The European Networks of the Heads of Environment Protection Agencies (EPA) and European Nature Conservation Agencies (ENCA). The subset of the Interest Group GMO consisted of the Environment Agency Austria (EAA), the Finnish Environment Institute (SYKE), the German Federal Agency for Nature Conservation (BfN), the Institute for Environmental Protection and Research (ISPRA), and the Swiss Federal Office for the Environment (FOEN).

References

57. EC European Commission Implementing Regulation (EU) No 540/2011 of 25 May 2011 implementing Regulation (EC) No 1107/2009 of the European Parliament and of the Council as regards the list of approved active substances. Official J Eur Union L. 2011;153(1):1–186.
72. Benbrook CM. Impacts of genetically engineered crops on pesticide use in the U.S. — the first sixteen years. Environ Sci Eur. 2012;24(1):1–13. doi:10.1186/2190-4715-24-24.
https://enveurope.springeropen.com/articles/10.1186/s12302-016-0100-y
Herbicide resistance and biodiversity: agronomic and environmental ...
Abstract

Farmland biodiversity is an important characteristic when assessing the sustainability of agricultural practices and is of major international concern. Scientific data indicate that agricultural intensification and pesticide use are among the main drivers of biodiversity loss. The data and experiences analysed here do not support statements that herbicide-resistant crops provide consistently better yields than conventional crops or reduce herbicide amounts. They rather show that the adoption of herbicide-resistant crops impacts agronomy, agricultural practice, and weed management and contributes to biodiversity loss in several ways: (i) many studies show that glyphosate-based herbicides, which were commonly regarded as less harmful, are toxic to a range of aquatic organisms and adversely affect the soil and intestinal microflora as well as plant disease resistance; the increased use of 2,4-D or dicamba, linked to new herbicide-resistant crops, causes special concern. (ii) The adoption of herbicide-resistant crops has reduced crop rotation and favoured weed management that is based solely on the use of herbicides. (iii) Continuous herbicide-resistance cropping and the intensive use of glyphosate over the last 20 years have led to the appearance of at least 34 glyphosate-resistant weed species worldwide. Although integrated weed management has been recommended for many years, farmers did not use it to counter resistance development in weeds but continued to rely on herbicides as the sole measure. Despite the occurrence of widespread resistance in weeds to other herbicides, industry rather develops transgenic crops with additional herbicide resistance genes. (iv) Agricultural management based on broad-spectrum herbicides, as in herbicide-resistant crops, further decreases the diversity and abundance of wild plants and impacts arthropod fauna and other farmland animals.
Taken together, adverse impacts of herbicide-resistant crops on biodiversity are to be expected when these crops are widely adopted, and they are indeed very hard to avoid. For that reason, and in order to comply with international agreements to protect and enhance biodiversity, agriculture needs to focus on practices that are more environmentally friendly, including an overall reduction in pesticide use. (Pesticides are used for agricultural as well as non-agricultural purposes. Most commonly they are used as plant protection products, for which the term is widely treated as a synonym, as it is in this text.)

Preliminary remark

Together with the supplement, the present paper is a summary and an update of a comprehensive technical report which was previously published by the German Federal Agency for Nature Conservation (BfN), the Austrian Environment Agency (EAA), and the Swiss Federal Office for the Environment (FOEN) [1]. Based on this technical report (see Additional file 1), some members of the Interest Group GMO within the EPA and ENCA networks (Footnote 1) drafted a position paper which highlights key messages regarding the environmental impacts of the cultivation of genetically modified herbicide-resistant plants [2, 3]. Acting upon the key messages should improve the current environmental risk assessment of these plants. The position paper was recently addressed to relevant EU bodies with the aim of ensuring adequate protection of the environment in the future. Most of the members of the IG GMO within the EPA and ENCA networks are involved in the risk assessment of GMOs in the EU and other European countries. Hence, the group consists of agencies responsible for the authorization of GMO releases as well as public institutions that provide scientific support to national administrations, e.g. as regards risk assessment. This paper summarizes the lessons learned from the experience with the use of GM plants resistant to the herbicides glyphosate and glufosinate.
It is based on a more detailed paper that can be accessed as a supplement to this article. Ongoing discussions about the food and feed safety of GM crops and the concept of substantial equivalence are not within the scope of this paper. Throughout this document, the terms “herbicide resistance” and “herbicide tolerance” are used as defined by the Weed Science Society of America [4]; the two terms are not used synonymously with respect to a particular response to a herbicide; rather, they distinguish naturally occurring “tolerance” from engineered “resistance”.

Review

Agreements and regulations covering biodiversity protection

Conservation of biodiversity is high on the agenda of international and national environmental policies, though not very present in public awareness. The need to protect biodiversity and stop its loss was acknowledged in the Convention on Biological Diversity (CBD), internationally agreed in 1992 and underscored by relevant decisions since then (Footnote 2); the Convention entered into force in 1993. The Cartagena Protocol on Biosafety (CPB), adopted by the Parties to the CBD in 2000 and entering into force in 2003, seeks to protect biological diversity from potential risks posed by living modified organisms (LMOs), focusing especially on transboundary movement. Moreover, the CPB aims to facilitate information exchange on LMOs and procedures to ensure that countries can make informed decisions before they agree to import LMOs. Currently, 195 nations plus the EU are Parties to the CBD and 169 plus the EU to the Cartagena Protocol. In the EU, the deliberate release into the environment of genetically modified organisms (GMOs) is regulated by Directive 2001/18/EC and Directive (EU) 2015/412. Referring to the precautionary principle, Directive 2001/18/EC aims at the protection of human and animal health and the environment.
In the course of the environmental risk assessment, intended and unintended as well as cumulative long-term effects relevant to the release and the placing on the market of GMOs have to be considered comprehensively. Most commercially planted genetically modified (GM) crops are either herbicide-resistant (HR) or insect-resistant (IR), many carrying both traits. Based on recent data and experience, there are concerns that HR crops promote the further intensification of farming and may therefore increase pressure on biodiversity.

Herbicide-resistant crops

Herbicide resistance is the predominant trait of cultivated GM crops and will remain so in the near future. GM crops resistant to the broad-spectrum herbicides glyphosate and glufosinate were first cultivated commercially in the 1990s [5], and GM crops with resistance to other herbicides are under development [6] or already on the market, with various HR traits increasingly combined in one crop [7]. Another, more recent strategy is the development of plants that are resistant to high concentrations of glyphosate without exhibiting a yield drag [8, 9]. Glyphosate inhibits 5-enolpyruvylshikimate-3-phosphate synthase (EPSPS), an enzyme of the shikimate pathway for the biosynthesis of aromatic amino acids and phenolics in plants and microorganisms. This enzyme is not present in human or animal cells [10]. Glufosinate ammonium is an equimolar, racemic mixture of the d- and l-isomers of phosphinothricin (PPT). The l-isomer inhibits plant glutamine synthetase, leading to the accumulation of lethal levels of ammonia [11]. To confer resistance to glyphosate, most glyphosate-resistant crops express a glyphosate-insensitive EPSPS derived from Agrobacterium spp., some also the glyphosate-degrading enzyme glyphosate oxidoreductase (GOX) and/or the enzyme glyphosate acetyltransferase (GAT) that modifies glyphosate.
In addition, various crops have also been transformed with one of the two bacterial genes pat or bar from Streptomyces spp., conferring resistance to glufosinate-based herbicides. These genes encode the enzyme phosphinothricin acetyltransferase (PAT), which detoxifies l-PPT. Other transgenes contained in HR crops confer resistance to ALS inhibitors (Footnote 3) (gm-hra gene), 2,4-D (Footnote 4) (aad-1 and aad-12 genes), or dicamba (dmo gene). While many transgenic HR crop species have been tested in the field, only four have been widely grown commercially since the late 1990s: soybean, maize, cotton, and canola [12]. In 2013, of the 175.2 million ha global GM crop area, about 57% (99.4 million ha) were planted with HR varieties and another 27% (47 million ha) with stacked HR/IR crops [13]. Hence, 84% of the GM crops carried HR genes (146.4 million ha). HR soybean is the dominant GM crop and is grown mainly in North and South America, making up about 80% of the global soybean area and 46% of the total GM crop area [12]. In GM maize and GM cotton, HR traits are often combined with IR genes. In the US, HR crops such as alfalfa, sugar beet, creeping bentgrass, and rice are already deregulated and on the market, or pending deregulation [7].

Yields of HR crops

Contrary to widespread assumptions, HR crops do not provide consistently better yields than conventional crops. Increased yield is not the main reason for farmers to adopt HR crops. Where there are yield differences between HR and conventional crops, they may be due to various factors, such as the scale and region of growing, site and size of farms, soil, climate, tillage system, weed abundance, genetic background/varieties, crop management, weed control practice, farmer skills, and the education of the farm operators. Reviewing data on the agronomic performance of GM crops, Areal et al.
[14] concluded that although GM crops in general perform better than their conventional counterparts in agronomic and economic (gross margin) terms, results on the yield performance of HR crops vary. A consistent yield advantage of HR crops over conventional systems could not be demonstrated [15–17]. The actual yield reduction observed in RoundupReady soybean in some studies [15] might be due to several causes: (i) the resistance gene present in the first-generation RoundupReady line (40-3-2) [18], (ii) reduced nodular nitrogen fixation upon glyphosate application [19], and/or (iii) a weaker defence response [20]. Application of glyphosate seemed to affect nodule number and mass, which have been correlated with nitrogen fixation [21], and to cause the symptom of “yellow flashing”, which leads to a decrease in grain yield (see the discussion in [9]). The second-generation RR2Y soybean (MON 89788) was introduced to provide better yields, but when tested in the greenhouse, different cultivars of RR2Y performed less well than RR 40-3-2 [22].

Eco-toxicological attributes of complementary herbicides

Impacts of HR crops on biodiversity are possible through the altered herbicide management option, that is, the application of a broad-spectrum herbicide during crop growth and its impacts on weed abundance and diversity. These impacts, also called indirect effects, are dealt with later in this text. Direct impacts relate to the toxicity of the herbicide, its residues, and its breakdown products. First, an update on the eco-toxicological attributes and direct effects of the relevant complementary herbicides of HR crops is given.

Glyphosate

Glyphosate (C3H8NO5P; N-(phosphonomethyl)glycine), a polar, water-soluble organic acid, is a potent chelator that easily binds divalent cations (e.g. Ca, Mg, Mn, and Fe) and forms stable complexes [23]. In addition to the active ingredient (a.i.)
that can be present in various concentrations, herbicides usually contain adjuvants or surfactants that facilitate penetration of the active ingredient through the waxy surfaces of the treated plants. The best-known glyphosate-containing herbicides, the Roundup product line, often contain as a surfactant polyethoxylated tallow amine (POEA), a complex mixture of di-ethoxylates of tallow amines characterized by their oxide/tallow amine ratio, which is significantly more toxic than glyphosate itself [24]. The toxicity of formulations to human cells varies considerably, depending on the concentration (and homologue) of POEA [25]. Data from toxicity studies performed with glyphosate alone and over short periods of time may thus conceal adverse effects of the herbicides. Glyphosate degradation is reported to be rapid (half-lives up to 130 days) [3], but its main metabolite, aminomethylphosphonic acid (AMPA), degrades more slowly. Both substances are frequently and widely found in US soils, surface water, groundwater, and precipitation [26]. Recently, the widespread occurrence of POEA and the persistence of POEA homologues in US agricultural soils have been reported [27], with currently unknown and unexplored consequences. Inhibition of the enzyme EPSPS and disruption of the shikimate pathway impact protein synthesis and the production of phenolics, including defence molecules, lignin derivatives, and salicylic acid [28]. Glyphosate impacts plant uptake and transport of micronutrients (e.g. Mn, Fe, Cu, and Zn), whose undersupply can reduce disease resistance and plant growth [20, 23]. In Argentine soils, residue levels of up to 1500 µg/kg (1.5 ppm) glyphosate and 2250 µg/kg (2.25 ppm) AMPA have been found [29]. Glyphosate affects the composition of the microflora in soil and gastrointestinal tracts differently, suppressing some microorganisms and favouring others [30, 31]. This is likely linked to varying sensitivities of bacterial EPSPS enzymes to glyphosate [32].
In the RoundupReady soybean system, the bacterial-dependent nitrogen fixation and/or assimilation can be reduced [33]. Impacts of glyphosate on fungi also vary, depending on study sites, species, pathogen inoculum, timing of herbicide application, soil properties, and tillage [28]. Mycorrhizal fungi seem to be sensitive to glyphosate [34], while others, including pathogenic Fusarium fungi, may be favoured under certain conditions, since glyphosate may serve as a nutrient and energy source [30]. The microbial community of the gastrointestinal tract of animals and humans may be severely affected if, as reported by Shehata et al. for poultry microbiota in vitro [31], pathogenic bacteria (e.g. Salmonella and Clostridium) are less sensitive to glyphosate than beneficial bacteria, e.g. lactic acid bacteria. For this reason, studies on glyphosate effects on the gut microbiome of other species are needed. Glyphosate-based herbicides can affect aquatic microorganisms both negatively (e.g. total phytoplankton and the nitrifying community) and positively (e.g. cyanobacteria) [35, 36], with surfactants such as POEA being significantly more toxic than the active ingredient itself [37]. In studies where Daphnia magna were fed glyphosate residues over the whole life-cycle, growth, reproductive maturity, and offspring number were impaired [38]. Amphibians are particularly at risk, since shallow temporary ponds are areas where pollutants can accumulate without substantial dilution. Sublethal concentrations of glyphosate herbicides can cause teratogenic effects and developmental failures in amphibians and impact both larval and adult stages [39]. Environmentally relevant levels of exposure to both glyphosate and Roundup have led to major changes in the liver transcriptome of brown trout, reflective of oxidative stress and a cellular stress response [40].
Simultaneous exposure to glyphosate-based herbicides and other stressors can induce or increase adverse impacts on fish [41] and amphibians [42]. Glyphosate application reduced the number and mass of casts and the reproductive success of earthworm species that inhabit agroecosystems [43]. Impacts on arthropods, among them beneficial land predators and parasites, vary [44]. Exposure to sublethal glyphosate doses impairs the behaviour and cognitive capacities of honey bees [45]. The acute toxicity of glyphosate to mammals is lower than that of other herbicides. In recent years, however, glyphosate-based herbicides have been reported to be toxic to human and rat cells, to impact chromosomes and organelle membranes, to act as endocrine disruptors, and to lead to significant changes in the transcriptome of rat liver and kidney cells [25, 46, 47]. Negative effects of glyphosate on embryonic development after injection into Xenopus laevis and chicken embryos have been linked to interference of glyphosate with retinoic acid signalling, which plays an important role in gene regulation during early vertebrate development; this also shows that damage can occur at very low levels of exposure [48]. The International Agency for Research on Cancer (IARC) concluded in a recent report that glyphosate is probably carcinogenic to humans [49]. When mandated by the European Commission to consider IARC's conclusion, EFSA identified some data gaps but argued that, based on its own calculations of the glyphosate doses humans may be exposed to, glyphosate is unlikely to pose a carcinogenic hazard to humans [50]. The current concerns over the use of glyphosate-based herbicides are summarized in a recent paper [51], which concludes that glyphosate-based herbicides should be prioritized for further toxicological evaluation and for biomonitoring studies.

Glufosinate ammonium

The l-isomer of phosphinothricin (l-PPT) inhibits the glutamine synthetase of susceptible plants, resulting in the accumulation of lethal levels of ammonia [11].
Fewer data on the eco-toxicity of glufosinate are available compared to glyphosate, presumably due to the significantly lower use of glufosinate. The formulated product is known to be (slightly) toxic to fish and aquatic invertebrates. Glufosinate has been shown to suppress some soil microorganisms, whereas others exhibit tolerance [52]. Some fungal pathogens seem to be reduced by glufosinate, potentially due to inhibition of their glutamine synthetase, similar to the inhibition in plants [53]. Glufosinate may impact predatory insects, mites, and butterflies [54, 55]. Glufosinate ammonium has the potential to induce severe reproductive and developmental toxicity in rats and rabbits [56]. Because of its reproductive toxicity, the use of glufosinate will be phased out in the EU by September 2017 [57]. In other countries, however, glufosinate use may not be discontinued, as glufosinate-resistant crops are increasingly grown in reaction to the ever greater number of glyphosate-resistant weeds [7, 58].

Other herbicides

The increasing use of “old” herbicides such as synthetic auxins, expected in the course of the US deregulation of crops resistant to 2,4-D or dicamba, raises serious concerns. Synthetic analogues of the plant hormone auxin cause uncontrolled and disorganized plant growth, finally killing sensitive plants, e.g. broadleaf weeds. The herbicide 2,4-D is 75 times and dicamba 400 times more toxic to broadleaf plants than glyphosate [59]. Both herbicides are highly volatile, thus increasing the potential for damage to non-target organisms due to spray drift. Sensitive crops, vegetables, ornamentals, and plants in home gardens could be damaged, and both plant and arthropod communities in field edges and semi-natural habitats affected [60]. Whether a new formulation with lower volatility to be used in resistant crops, e.g.
Enlist Duo, comprising 2,4-D and glyphosate, and special stewardship guidelines will help reduce adverse herbicide effects is highly questionable [59], since lower volatility of a substance may reduce exposure but not toxicity, and stewardship programs address resistance issues in the target organisms, not toxicity issues. The herbicides 2,4-D and 2,4,5-T (2,4,5-trichlorophenoxyacetic acid) each accounted for about 50% of Agent Orange, the herbicide product sprayed by the US military in the jungles of Vietnam. Agent Orange contained highly toxic impurities, including dioxins and furans. Such impurities in current 2,4-D-containing herbicides are still a concern, especially in herbicides manufactured outside the EU and US [61]. Recently, IARC [62] classified 2,4-D as a “possible human carcinogen”, a classification which is not shared by EFSA [63]. Due to potential synergistic effects between the two ingredients of Enlist Duo on non-target plants, the US Environmental Protection Agency has considered taking legal action to revoke the registration of this herbicide mix [64].

Impacts on agricultural practice and agronomy

HR crops can have various impacts on agricultural practice and agronomy, including weed control, soil tillage, planting, crop rotation, yield, and net income. These interdependent factors influence to which degree and under which circumstances HR crops are adopted and should be taken into account when impacts of HR crops on biodiversity are considered comprehensively. Resistance to the broad-spectrum herbicides glyphosate and glufosinate allows previously sensitive crops to survive their application, facilitating weed control and giving the farmer more flexibility, e.g. by extending the time window for spraying and allowing post-emergence application.
Conservation tillage, often recommended to reduce soil erosion and to save costs and energy, has increased and might expand even further if more HR crops are grown, as they are well adapted to low-tillage systems. From 1996 to 2008, adoption of conservation tillage in US soybean cultivation increased significantly [58]. In the US, the most often stated reasons for the adoption of HR crops were improved and simplified weed control, lower labour and fuel costs, no-till planting/planting flexibility, yield increase, an extended time window for spraying, and in some cases decreased pesticide input [65]. Labour reduction may allow generating off-farm income [66]. In the beginning, weed resistance management did not seem that important to farmers, although weeds had become resistant to commonly used selective herbicides before [6]. Farmers were likely guided by the industry's argument that, for a couple of reasons, among them glyphosate's unique properties, glyphosate-resistant weeds would not evolve, at least not very rapidly [67]. Reasons for the adoption of HR crops in South America were similar to those mentioned above [68]. Moreover, the lack of patent protection of GM seeds facilitated the introduction of HR soybean in Argentina, as seeds could be saved for planting and resale, and could also enter the black market from where they were smuggled into Brazil [69]. Crop rotation helps maintain high productivity by reducing pesticide use and fertilizer input and can reduce pest and pathogen incidences, weed infestation, and selection pressure for weed resistance to herbicides [58]. However, in regions where HR crops are widely adopted, there is a clear trend toward monoculture, and crop rotation and diversification are reduced [59]. In the US, in very large areas, crop rotation comprises only glyphosate-resistant crops, the most common rotation being HR soybean to HR corn [66].
In Argentina, within a few years, continuous HR soybean replaced 4.6 million ha of land initially dedicated to other crops, leading to a noticeable homogenization of production and landscapes [68].

Weed control patterns and herbicide use

HR crops are advertised as being environmentally friendly due to lower herbicide use compared to conventional crops. However, actual trends rather support the opposite. Changes in the overall amount of herbicides used are difficult to assess, since different herbicides are applied at different rates. Nevertheless, reports show that with the introduction of HR crops in the US in 1996, lower amounts of herbicides were applied during the first years, with glyphosate replacing other herbicides [70]. Since then, however, overall herbicide use in HR crops has increased: from 1998 to 2013, the increase in amounts (kg/ha) of active ingredient (a.i.) in HR soybean was 64%, compared to 19% in conventional soybean [71]. The cultivation of HR soybean, maize, and cotton increased herbicide use in the US by an estimated 239 million kg in 1996–2011, compared to non-HR crops, with HR soybean accounting for 70% of the total increase [72]. Global glyphosate use increased, too. While from 1995 to 2014 US agricultural use of glyphosate rose ninefold to 113.4 million kg, global agricultural use rose almost 15-fold to 747 million kg, with more than 50% accounted for by use on HR crops [73]. In Argentina, glyphosate use more than doubled from 2000 to 2011, due to the steady increase in the cultivation area of RoundupReady soybeans [74]. If HR crops were grown in Europe, it is estimated that herbicide use would rise significantly: if HR crop introduction were accompanied by resistance management, herbicide use would rise by 25%, and if it were unlimited as in the US, the increase would be 72% [75].
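As a quick cross-check (my own arithmetic, not part of the cited studies), the fold-increases and shares quoted above imply the following absolute figures:

```python
# Arithmetic implied by the herbicide-use figures quoted in the text.
# All amounts are in million kg of active ingredient (a.i.).

US_2014 = 113.4       # US agricultural glyphosate use, 2014 [73]
US_FOLD = 9           # "rose ninefold" between 1995 and 2014
GLOBAL_2014 = 747.0   # global agricultural glyphosate use, 2014 [73]
GLOBAL_FOLD = 15      # "rose almost 15-fold" between 1995 and 2014

US_INCREASE_1996_2011 = 239.0  # extra herbicide use attributed to HR crops, US [72]
HR_SOY_SHARE = 0.70            # HR soybean's share of that increase [72]

us_baseline_1995 = US_2014 / US_FOLD              # implied 1995 US use
global_baseline_1995 = GLOBAL_2014 / GLOBAL_FOLD  # implied 1995 global use
hr_soy_increase = US_INCREASE_1996_2011 * HR_SOY_SHARE

print(round(us_baseline_1995, 1))      # -> 12.6
print(round(global_baseline_1995, 1))  # -> 49.8
print(round(hr_soy_increase))          # -> 167
```

Because the global rise is described as "almost 15-fold", the roughly 50 million kg global baseline is an approximation; the US figures imply a 1995 baseline of about 12.6 million kg, and HR soybean alone accounts for roughly 167 of the 239 million kg increase.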
In addition, increased weed resistance to glyphosate leads to changes in the mix, total amount, cost, and overall environmental profile of the herbicides applied to HR crops [6, 71]. In 2013, almost two-thirds of RoundupReady soybean crops received an additional herbicide treatment, compared to 14% in 2006 [71]; for example, the use of 2,4-D increased from 2002 to 2011 by almost 40% [58]. With the introduction of additional HR traits, “old” herbicides such as 2,4-D, dicamba, ACCase (Footnote 5) and ALS inhibitors are used more frequently again. After the US deregulation of 2,4-D-resistant GM soybean and corn, 2,4-D amounts applied in the US could triple by 2020 compared to 2011, with glyphosate use remaining stable [58]. Use of 2,4-D on corn could increase over 30-fold from 2010 levels [72].

Changes in weed susceptibility

Both non-selective herbicides glyphosate and glufosinate are effective on a wide range of annual grass and broadleaf weed species. The simplicity and effectiveness of weed control in HR cropping systems can be undermined in several ways: (i) by shifts in weed communities and populations resulting from the selection pressure of the applied herbicides, (ii) by escape and proliferation of transgenic plants as weedy volunteers, and (iii) by hybridization with, and HR-gene introgression into, related weedy species. While point (i) indicates changes in biodiversity, points (ii) and (iii) could increase the overall herbicide use in chemical weed management and thereby affect biodiversity further.

Selection of resistance and weed shifts

In general, increased reliance on herbicides for weed control leads to a shift in weed species composition. Less sensitive species and populations survive herbicide sprayings and subsequently grow and spread, whereas more sensitive species disappear. By early 2016, a total of 249 weed species (with 464 biotypes) resistant to various herbicides had been recorded, occupying hundreds of thousands of fields worldwide.
Many of these biotypes are resistant to more than one herbicide mode of action [76]. Resistance genes can spread by hybridization between related weed species [77] and possibly accumulate in certain biotypes. Although glyphosate (and glufosinate) have long been considered low-risk herbicides with regard to the evolution of resistance [78], at least 34 glyphosate-resistant weed species (more than 240 populations) have been confirmed to date, observed on millions of hectares and increasingly associated with HR crop cultivation [76]. Many of them express resistance to other herbicide classes, too. In the US, the area actually infested likely exceeds 28 million ha by a sizable margin [79]. In particular, glyphosate-resistant palmer amaranth (Amaranthus palmeri) creates control problems and poses a major economic threat to US cotton production [58]. Recently, two weed species resistant to glufosinate have been described, among them one population also resistant to glyphosate [76]. The molecular and genetic mechanisms of resistance to glyphosate are very diverse and can co-occur [77, 80]: mutations in the EPSPS target site [81], increased EPSPS mRNA levels [82], EPSPS gene amplification [83], delayed glyphosate translocation [84], sequestration of glyphosate in vacuoles [85], and degradation in the plant [86] have all been described. The increased glyphosate use has also promoted a species shift among the weed flora, and several grass and broadleaf weeds are becoming problematic [87].

Resistance management

At the beginning of HR crop cultivation, resistance management was not considered to be an issue [67, 88], but this has since changed [89, 90]. For more than a decade now, weed scientists have been recommending that farmers implement an integrated weed management approach consisting of “many little hammers”. These “hammers” include crop and herbicide rotation, mechanical weeding, cover crops, intercropping, and mulching [91, 92].
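The selection dynamics described above can be sketched with a toy single-locus model: each application of the same herbicide kills most susceptible plants while sparing most resistant ones, so the resistant fraction of the population ratchets upward. The survival rates and starting frequency below are illustrative assumptions, not parameters from the cited studies:

```python
def spray(p, surv_resistant=0.95, surv_susceptible=0.05):
    """One herbicide application: differential survival shifts the
    resistant fraction p of the weed population upward."""
    r = p * surv_resistant        # resistant plants that survive
    s = (1 - p) * surv_susceptible  # susceptible plants that survive
    return r / (r + s)

p = 1e-6  # assumed initial resistant fraction (one plant in a million)
for n in range(1, 100):
    p = spray(p)
    if p > 0.5:
        break

print(f"Resistant biotype dominates after {n} sprayings (p = {p:.2f})")
```

Under these assumptions, each spraying multiplies the odds of resistance by 19 (0.95/0.05), so even a one-in-a-million biotype dominates after only a handful of applications. That is the logic behind rotating herbicide modes of action and the "many little hammers" approach: varying the selection pressure denies any single resistance mechanism a consistent advantage.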
But continuous HR cropping became common in the Americas, and farmers often simply resorted to higher glyphosate doses, additional applications (often both), and the combined use of other herbicides [93]. Paraquat and synthetic auxins are recommended in tank mixtures or in rotation with glyphosate, but resistance to these herbicides is about as common as resistance to glyphosate [76]. New herbicides will not be commercialized in the near future, due to increased development costs and the challenge of finding suitable substances that comply with the stricter regulatory standards for weed efficacy and environmental and toxicological safety [6]. In this context, it is notable that companies increasingly develop and commercialize GM crops that resist higher glyphosate doses or that contain stacked HR traits, such as resistance to glyphosate and/or glufosinate, in part combined with resistance to 2,4-D, dicamba, ACCase inhibitors, or HPPD inhibitors [6, 7, 9]. But as resistance to these herbicides is already common [76], stacking of HR traits and increased use of herbicides other than glyphosate will not reduce the selection pressure on weeds or decrease the overall herbicide amounts applied. In addition, merely rotating herbicides may exacerbate resistance problems by selecting for broader resistance mechanisms in weeds [94]. Against this background, integrated weed management is strongly recommended and seems to be the only sensible strategy in the long term. Cropping systems that employ such an approach are competitive, with regard to yields and profits, with systems that rely chiefly on herbicides [59]. A four-year crop rotation scheme (maize–soybean–small grain + alfalfa–alfalfa) not only helped reduce herbicide applications and fertilizer input, but also provided similar or even better yields and economic output, compared to the two-year maize–soybean rotation common in the US [95].
However, although tools for weed control other than herbicides are clearly needed, the use of herbicides is still the main weed management method, and the number of papers dealing with chemical control eclipses those on any other method [96].

Seed escape and proliferation of HR plants

Seed escape and proliferation of HR plants can create severe management problems, especially with persistent crops. Volunteers, that is, crop plants in the field emerging from the previous crop, create problems when the following crop is a different species or a different variety of the same species. Volunteer management becomes more complex if both volunteer plants and crops are resistant to the same herbicide. Crops with characteristics such as shattering and seed persistence are particularly likely to emerge as volunteers. Oilseed rape readily produces volunteers and feral plants, due to its high seed production, high seed losses during harvest and transport, and its secondary dormancy [97]. HR oilseed rape plants have been found up to 15 years after experimental releases, despite regular control of the fields for volunteers [98, 99]. The recently reported contamination of oilseed rape seed by the non-approved OXY-235 variety (resistant to oxynil herbicides) in the EU might be traced back to field trials in France in the nineties [100], indicating that volunteers may emerge even after almost 20 years. Seed spill can also occur outside the fields and along transport routes, potentially leading to HR feral plants that may persist over large spatial and temporal scales [101]. HR feral oilseed rape plants have been found along transport routes in the US [102], in Switzerland [103], and in Japan [104], in regions where the crop had never been grown.

HR-gene flow to volunteers, neighbouring crops or interfertile weeds

Gene flow from HR crops is a special aspect of agrobiodiversity and relevant for the purity of genetic resources.
The frequency of outcrossing depends on the crop species in question and its pollination system, the distance to simultaneously flowering volunteers or relatives, and variables such as genotype, abundance and foraging behaviour of pollinators, weather conditions, time of day, and the size of the pollen donor and receiving populations. Novel combinations of transgenic events can be formed in the wild [102]. Reviews on gene flow have focused on the main GM crops [105] or on single crop species such as oilseed rape [106], maize [107], rice [108], sugar beet [109], and soybean [110]. As large pollen sources, such as crop fields, interact on a regional scale and tend to increase gene flow, isolation distances have to be adjusted accordingly [111]. In centres of crop origin and in regions where interfertile weeds, which can hybridize with crops, are present, gene flow from crop to weed should be taken into account. This is true for oilseed rape (Brassica napus) and its close relative field mustard (Brassica rapa) in many regions of Europe [106]. Once herbicide resistance genes move into weeds, their frequency within local weed populations will increase under selection pressure by the corresponding herbicide. Hybrids do not need to be particularly fit as long as they are able to backcross with the weedy relative, a capacity characteristic of many interspecific hybrids. Even genotypes with a lower fitness may survive if the pollen flow is steady and the pollen source is large [112]. In some European regulatory frameworks, e.g. the Swiss biosafety regulations, undesired gene flow is in itself considered an adverse effect.

Agriculture and biodiversity

Intensive high-input farming is a major force driving biodiversity loss and other environmental impacts beyond the “planetary boundaries” [113, 114]. Drivers include, for example,
the low number of cropped species, reduced rotation, limited seed exchange between farms, drainage, landscape consolidation, and increased use of pesticides. At the same time, agriculture relies on ecosystem functions and services and on biodiversity, including pollination, biological pest control, maintenance of soil structure and fertility, nutrient cycling, and hydrological services [115]. Weeds are part of the biodiversity of the agroecosystem. Although commonly regarded as pests, they offer considerable benefits to the agroecosystem by supporting a range of organisms such as decomposers, predators, pollinators, and parasitoids. They fulfil certain functions within the agroecosystem that become obvious when they are missing: e.g., decreasing the antagonists of pest species can increase pesticide inputs, as demonstrated by exclusion experiments [116, 117], and lower numbers of pollinators may reduce yield and quality in crops depending on animal pollination [118]. Within the last decades, the diversity of the “associated agricultural flora” (a neutral expression for weeds) and the seed bank in arable soils have been reduced significantly [119, 120]. If the associated flora and arthropods decrease in abundance and diversity, this will affect the whole food chain, including small mammals and farmland birds, the latter being major targets and important indicators of agricultural change [121]. Organic farming, by contrast, has a large positive effect on biodiversity, with plants benefiting the most among taxonomic groups [122].

Indirect effects of HR agriculture on biodiversity

As outlined above, broad-spectrum herbicides directly affect various organisms. However, as part of the HR weed management system, they also affect biodiversity as a whole.
As glyphosate and glufosinate are effective on more weed species than other currently used herbicides or mechanical weeding, and on more than is necessary for crop protection and productivity, they increase the level of weed suppression. HR crops are therefore likely to support monocultures and the excessive control of weeds in agricultural environments. Indications of increased biodiversity loss were found in the three-year Farm Scale Evaluations (FSE), in which the effects of HR cropping systems on the abundance and species diversity of wild plants and arthropods were investigated across Britain [123, 124]. In glyphosate-resistant sugar beet and fodder beet and in glufosinate-resistant oilseed rape, wild plant density, biomass, seed rain, and seed bank were one-third to one-sixth lower than in the conventional counterparts; fewer species also emerged, compared to conventional management [125–127]. On the other hand, glufosinate-resistant maize showed more diverse weed species compared to conventional maize sprayed with atrazine. However, atrazine is highly effective on a broad range of plants and is no longer approved in the EU. Herbicide drift to field margins is a concern for nature conservation and the biodiversity of agricultural landscapes, as field margins and hedgerows often harbour rare plant species [128]. These habitats, too, were negatively affected in the FSE trials [129]. In the FSE trials, the abundance of arthropods changed in the same direction as their resources, and herbivores, pollinators, and beneficial natural enemies of pests were reduced [130]. The FSE findings are supported by results of a one-year canola field study in Canada, where wild bee abundance was highest in organic fields, followed by conventional fields, and lowest in HR crops [131]. This might also impact vertebrates: if weed abundance and spectra are diminished, birds [132] and migrating adult amphibians [39] may have difficulty finding enough seeds or invertebrates for food.
A prominent example of indirect effects of HR crops on biodiversity on a large scale is the case of the monarch butterfly. Recent US data indicate that within the last decade, in parallel to the widespread and increased adoption of HR crops, the population size of the migratory monarch butterfly (Danaus plexippus) has declined significantly, due at least in part to the widespread loss of milkweeds (Asclepias syriaca) in the Midwest [133–135]. Milkweed is the main food plant of monarch larvae, and the Midwest is the main breeding ground for monarchs. Should HR maize and HR oilseed rape be widely grown in Europe, a similar scenario has been predicted for the European butterfly Queen of Spain fritillary (Issoria lathonia) [136].

Aspects of sustainable agriculture

The overreliance of HR cropping systems on chemical weed control discourages the use and retention of existing alternative weed management skills. In addition, HR cropping systems are not compatible with mixed cropping systems [137]. Diversification practices, however, such as cover crops, mixed cropping, intercropping, and agroforestry, help retain soil and soil moisture better than intensive cropping, improve resilience to climate disasters, and thus support the structures of the agroecosystem that provide ecosystem services. Small multifunctional and ecologically managed farms are more productive than large farms if total output, including energy input/output, is considered rather than single-crop yield. However, human labour cannot be fully substituted by mechanization in such farming approaches [138, 139]. Davis et al. [95] showed in a nine-year field study in the US corn belt that more diverse rotations including forage legumes enhanced yields of corn and soybean grain by up to 9% and significantly reduced fertilizer application, energy use, and herbicide input. Weed control and profitability remained the same, whereas labour demand was higher.
As pointed out by the International Assessment of Agricultural Knowledge, Science and Technology for Development (IAASTD) [140], agriculture is multifunctional and serves diverse needs. But for many years, agricultural science and development have focused on delivering technologies to increase farm-level productivity rather than integrating externalities such as impacts on biodiversity and the relationship between agriculture and climate change. In view of the current challenges, the IAASTD concludes that business as usual is not an option. Rather, increased attention needs to be directed toward new and successful existing approaches to maintain and restore soil fertility and to maintain a truly sustainable agricultural production. From the data collected and assessed, HR cropping systems do not appear to be an option for a sustainable agriculture that also focuses on the protection of biodiversity. On the contrary, HR crops rather seem to be part of the problem.

Conclusions

Intensive high-input farming is known as one of the main drivers of the continuous biodiversity loss in agricultural landscapes. The diversity and abundance of the weed flora provide relevant indicators of farmland biodiversity. While HR cropping facilitates weed control for farmers and makes chemical weed management more flexible, it is accompanied by increased herbicide use and less crop rotation. Toxic effects of the complementary herbicides on non-target organisms, e.g. soil and aquatic organisms, have been shown. Due to the widespread use of glyphosate, at least 34 glyphosate-resistant weed species have evolved worldwide. To counter resistance evolution in weeds, integrated weed management is recommended, but continuous and widespread HR cropping is still very common. The commercial trend is to develop new GM crops with stacked HR traits and GM varieties with increased glyphosate resistance. However, this approach will not reduce the overall herbicide amounts used in agriculture.
Control problems can also arise due to HR volunteers or feral plants, e.g. HR oilseed rape. In centres of crop origin and in regions where sexually compatible plants occur, transfer of HR genes to wild relatives can be expected. Biodiversity will be affected by HR cropping systems through the very efficient removal of weed plants, which in turn leads to a further reduction in the diversity and abundance of flora and fauna. A prominent example in this respect may be the decline of monarch butterfly populations in the US, which has been linked to the massive loss of their food plants upon widespread adoption of HR crops. Since it has been shown that HR systems are not compatible with measures to stop the loss of biodiversity on farmland, a more sustainable model of agriculture is needed, which, according to present experience, cannot reasonably integrate approaches like HR cropping.

Notes

The European Networks of the Heads of Environment Protection Agencies (EPA) and European Nature Conservation Agencies (ENCA). The subset of the Interest Group GMO consisted of the Environment Agency Austria (EAA), the Finnish Environment Institute (SYKE), the German Federal Agency for Nature Conservation (BfN), the Institute for Environmental Protection and Research (ISPRA), and the Swiss Federal Office for the Environment (FOEN).

EC (2011) European Commission Implementing Regulation (EU) No 540/2011 of 25 May 2011 implementing Regulation (EC) No 1107/2009 of the European Parliament and of the Council as regards the list of approved active substances. Official J Eur Union L 153(1):1–186

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
The need to protect biodiversity and stop its loss was acknowledged in the Convention on Biological Diversity (CBD), internationally agreed in 1992, and has been underscored by relevant decisions since then (the Convention entered into force in 1993). The Cartagena Protocol on Biosafety (CPB), adopted by the Parties to the CBD in 2000 and entering into force in 2003, seeks to protect biological diversity from potential risks posed by living modified organisms (LMOs), with a special focus on transboundary movement. Moreover, the CPB aims to facilitate information exchange on LMOs and procedures to ensure that countries can make informed decisions before they agree to import LMOs. Currently, 195 nations plus the EU are Parties to the CBD, and 169 plus the EU to the Cartagena Protocol. In the EU, the deliberate release into the environment of genetically modified organisms (GMOs) is regulated by Directive 2001/18/EC and Directive (EU) 2015/412. Referring to the precautionary principle, Directive 2001/18/EC aims at the protection of human and animal health and the environment. In the course of the environmental risk assessment, intended and unintended as well as cumulative long-term effects relevant to the release and the placing on the market of GMOs have to be considered comprehensively. Most commercially planted genetically modified (GM) crops are either herbicide-resistant (HR) or insect-resistant (IR), many carrying both traits. Based on recent data and experience, there are concerns that HR crops promote the further intensification of farming and may therefore increase pressure on biodiversity.

Herbicide-resistant crops

Herbicide resistance is the predominant trait of cultivated GM crops and will remain so in the near future.
https://www.canr.msu.edu/news/superweeds-secondary-pests-lack-of-biodiversity-are-frequent-gmo-concerns
Superweeds, secondary pests & lack of biodiversity are frequent GMO concerns
In 1994, modern agriculture in the United States changed dramatically when the U.S. Food and Drug Administration approved the first genetically modified organism, or GMO, for commercial cultivation on American farms. More GMO crops followed. In 1995, the U.S. Environmental Protection Agency (EPA) approved the first crop genetically modified to produce Bt toxin, a naturally occurring insecticide made by the bacterium Bacillus thuringiensis. A year later, soybeans genetically modified to resist the highly effective herbicide glyphosate (often sold under the tradename Roundup) appeared on the market. These GMO crops, and those that followed, gave farmers new tools to deploy against two of their oldest foes: insects and weeds. The benefits were many. According to a 2016 study by PG Economics, an agriculture advisory and consultancy firm based in the United Kingdom, they reduced the volume of pesticide sprays by over 8 percent and reduced greenhouse gas emissions from agricultural equipment by over 500 kilograms in the United States alone. The use of GMO crops also improved soil health by making no-till farming practical. Today, about 94 percent of soybeans and 89 percent of corn grown in the United States are herbicide-resistant, according to the U.S. Department of Agriculture (USDA) Economic Research Service. These statistics also show that Bt corn and Bt cotton comprise 81 and 85 percent of their crops, respectively. And many modern cultivars now contain both Bt and herbicide-resistant traits. GMO technology has not come without controversy. Since the introduction of GMO crops, consumers, policymakers and scientists alike have raised concerns over their potential negative effects on the environment. Critics claim that GMO crops have caused the emergence of herbicide-resistant superweeds, the rise of secondary pest insects to fill the void left by those decimated by Bt toxin, and a reduction in biodiversity in areas surrounding agricultural fields. 
The rise of superweeds

Since the introduction of glyphosate-resistant crops, about 38 weed species worldwide have been identified that have developed resistance to glyphosate. As a result, these so-called superweeds can continue to infest fields and siphon nutrients from the valuable crops planted there, leading farmers to use other costlier – and potentially harsher – herbicides to control them. Questions quickly arose regarding the role the expanded use of GMO crops played in the development of superweeds. Bernard Zandstra, professor in the Michigan State University (MSU) Department of Horticulture, has spent his career studying weed control in fruit and vegetable crops. “Herbicide resistance in weeds comes from the regular, repeated application of the same herbicide, rather than the presence of genetically modified crops,” Zandstra said. “Glyphosate, for example, gives us a very convenient, clean and safe system. For the first 10 or 15 years [it was available], you could spray it once [and be done]. But its overuse caused resistant weeds to develop.” Zandstra points to a finding consistent across much of the research: that herbicide resistance in weeds, far from a new phenomenon linked with the advent of GMO crops, has been a long-understood consequence of pesticide overuse. Glyphosate-ready crops merely made it easier to rely on a single herbicide for all weed management. James Hancock, professor emeritus in the MSU Department of Horticulture, said that cases of GMO traits being transferred to non-GMO plants in the wild are rare. He added that while the overreliance on one herbicide has spurred the development of resistance to it, genetic modification did not have a direct role. “The idea of herbicide resistance escaping from a GMO crop into the wild is an understandable concern,” said Hancock, an MSU AgBioResearch expert in plant breeding and the biosafety of GMO crops.
“In terms of being able to survive under the stresses of the wild, our domesticated crops are wimps. They’re bred to thrive under specific, human-maintained conditions. So if they hybridize with wild plants, those offspring will almost always be weaker and less capable of surviving [than the parent plants].” Before glyphosate-based herbicides became available, farmers relied on a suite of chemicals for weed control. Individual herbicides were effective against a narrow range of plants, and farmers used them in rotation to effectively manage weeds. Rotation helped control the emergence of resistance by exposing weeds to a wide range of stresses. According to Zandstra, when used in conjunction with appropriate spraying practices, GMO crops remain invaluable to many farmers. “GMO crops, like anything else on the farm, are a tool,” he said. “When used in the context of good agronomic practices, such as rotating herbicide sprays, they become a great tool for the farmer and the consumer by making farms more efficient and economical. I recommend using another nonglyphosate herbicide alongside glyphosate, for example, so that if you do have some resistant weeds in the field, you ensure you aren’t leaving them behind to flourish.” Even if herbicide-resistant weeds were to render some current weed control technologies ineffective, Hancock said farmers and researchers would find ways to adapt to the changes. “Weed resistance just returns us to where we were before we had access to herbicides like glyphosate,” Hancock said. “That doesn’t create a new problem, it just brings an old one back that was being handled in different ways.”

Out with the old pests, in with the new

As major insect pests succumb to Bt crops, other secondary pests that aren’t affected by Bt toxin often take advantage of the lack of competition.
A well-known instance of this occurred in China, where widespread use of Bt cotton allowed farmers to effectively control the destructive cotton bollworm while reducing pesticide use. It dramatically improved yields and cut pest management costs. The bollworm’s decline, however, allowed the population of the mirid bug, historically a minor pest of cotton plants that is not affected by Bt toxin, to increase. This again led to increased pest control costs as farmers contended with a new threat that their previous practices couldn’t contain. “Bt toxin is only effective against particular species, leaving a wide array of insect pests that aren’t impacted by it,” Hancock said. “It makes logical sense that if you kill a major pest, but the chemical you’re using doesn’t kill other pests, those secondary pests will rise to take the first one’s place. But it remains in farmers’ best interests to control that first major pest, and then develop other solutions to confront the new problem.” While the use of Bt crops has helped farmers control a number of serious pests, Hancock said it was never intended as a total or permanent solution to all insect pest issues in agriculture. Effective insect control will likely always require a suite of integrated pest management practices, with Bt crops playing a significant, but not all-encompassing, role.

Biodiversity and landscape simplification

As the popularity of GMO crops has risen, so too have concerns that the crops could reduce the biodiversity of both the agricultural landscape and the surrounding wild ecosystems. “Any time you have a successful crop variety – GMO or not – that everyone wants to plant, you inevitably reduce the biodiversity in farm fields,” Hancock said. “GMOs are no different in this regard than any other effective cultivar, but GMO crops tend to have traits that make them particularly successful for farmers.
At the same time, however, there are already hundreds of GMO crop varieties available, so farmers aren’t being limited to just a handful.” “Our regulatory system guards against the release of harmful crop varieties,” Hancock said. “It’s also unlikely a breeder would be willing to release something that would have an impact on the natural ecosystem.” While GMO crops undergo years-long, thorough vetting processes, some questions still remain. MSU AgBioResearch entomologist Doug Landis has studied the phenomenon of landscape simplification and its effect on monarch butterflies for several years. Due to the effectiveness of herbicide-resistant crops, plants like common milkweed have been all but eliminated from most crop fields. While beneficial to crops, the loss of milkweed has been linked to new challenges facing insects like the monarch butterfly, which has experienced a population decline of about 80 percent in the last two decades. Many factors have been connected with the decline, with no single, definitive cause emerging. But Landis believes the simplification of agricultural landscapes may play a role. “Monarchs overwinter in Mexico, but they breed during the summer in the north central U.S. and parts of Canada,” said Landis, University Distinguished Professor in the Department of Entomology. “They depend on this breeding period to build up their numbers for the migration south, and the best information we have suggests a principal reason for monarch decline is the reduced abundance of milkweed in that north central region.” The loss of milkweed from crop fields has forced monarchs to seek out milkweeds in more dangerous grassland settings, where predators abound. In grasslands, 60 percent of monarch eggs can be lost in a single day, compared to just 10 to 20 percent in an agricultural setting, said Landis. He is quick to point out that monarchs survived for thousands of years before agriculture came to North America.
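The daily egg-loss figures Landis cites compound quickly. A short sketch makes the gap concrete; the four-day egg-stage length is an illustrative assumption (monarch eggs typically hatch within a few days), not a number from the article:

```python
def egg_survival(daily_loss, days):
    """Fraction of eggs surviving `days` of compounding daily predation loss."""
    return (1 - daily_loss) ** days

days = 4  # assumed egg-stage length, for illustration only

grassland = egg_survival(0.60, days)  # 60% of eggs lost per day (grassland)
farm_high = egg_survival(0.20, days)  # 20% lost per day (agricultural, high end)
farm_low  = egg_survival(0.10, days)  # 10% lost per day (agricultural, low end)

print(f"Grassland survival:    {grassland:.1%}")
print(f"Agricultural survival: {farm_high:.1%} to {farm_low:.1%}")
```

Under these assumptions, only about 3% of grassland eggs survive four days of predation, versus roughly 41–66% in an agricultural setting, which is why the loss of in-field milkweed weighs so heavily on monarch reproduction.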
He adds that rebuilding natural systems may allow them to survive and thrive again. “Ecologists talk about the importance of having enemy-free space for the survival of young of any species,” Landis said. “We believe such spaces existed in grasslands thousands of years ago, when fire and larger animals disturbed the landscape and created patches for new milkweed to grow.” Landis and his research team are exploring ways to recreate this in the modern landscape. One approach under investigation is selectively mowing small patches of milkweed in roadside grasslands to encourage the development of younger milkweed shoots preferred by monarchs. Reintroducing a diversity of weed and pest management practices, rather than relying on just a few, will benefit the entire ecosystem. “The problem is relying on one or two practices, like spraying glyphosate or dicamba (another widely used herbicide), across vast areas of land,” Landis said. “It’s a recipe for resistance and landscape simplification, which has knock-on effects for the ecosystem. Reintroducing diversity, both in practices and in the way we structure the landscape, brings resilience to the ecosystem that’s lost when we rely on just one or two things.” The new difficulties in weeds, pests and biodiversity encountered in modern agriculture don’t stem directly from the use of GMO crops, but rather from treating the crops’ traits as a final solution to weed and pest management issues. Treating GMO crops as one among many tools in a management plan will help limit the spread of superweeds and secondary pests, as well as preserve landscape biodiversity. “GMO crops are a very powerful, safe technology when used alongside good agronomic practices,” Zandstra said. 
“They’re tools that have helped us feed our society and helped growers earn a living, and that contribute to the plentiful, inexpensive food we enjoy in this country.” This article was published in Futures, a magazine produced twice per year by Michigan State University AgBioResearch.
In 1994, modern agriculture in the United States changed dramatically when the U.S. Food and Drug Administration approved the first genetically modified organism, or GMO, for commercial cultivation on American farms. More GMO crops followed. In 1995, the U.S. Environmental Protection Agency (EPA) approved the first crop genetically modified to produce Bt toxin, a naturally occurring insecticide made by the bacterium Bacillus thuringiensis. A year later, soybeans genetically modified to resist the highly effective herbicide glyphosate (often sold under the tradename Roundup) appeared on the market. These GMO crops, and those that followed, gave farmers new tools to deploy against two of their oldest foes: insects and weeds. The benefits were many. According to a 2016 study by PG Economics, an agriculture advisory and consultancy firm based in the United Kingdom, they reduced the volume of pesticide sprays by over 8 percent and reduced greenhouse gas emissions from agricultural equipment by over 500 kilograms in the United States alone. The use of GMO crops also improved soil health by making no-till farming practical. Today, about 94 percent of soybeans and 89 percent of corn grown in the United States are herbicide-resistant, according to the U.S. Department of Agriculture (USDA) Economic Research Service. These statistics also show that Bt corn and Bt cotton comprise 81 and 85 percent of their crops, respectively. And many modern cultivars now contain both Bt and herbicide-resistant traits. GMO technology has not come without controversy. Since the introduction of GMO crops, consumers, policymakers and scientists alike have raised concerns over their potential negative effects on the environment. Critics claim that GMO crops have caused the emergence of herbicide-resistant superweeds, the rise of secondary pest insects to fill the void left by those decimated by Bt toxin, and a reduction in biodiversity in areas surrounding agricultural fields.
no
Biotechnology
Can Genetically Modified Crops Promote Biodiversity?
yes_statement
"genetically" modified crops can promote biodiversity. biodiversity can be promoted by "genetically" modified crops
https://www.fao.org/organicag/oa-faq/oa-faq6/en/
Organic Agriculture: What are the environmental benefits of organic ...
What are the environmental benefits of organic agriculture? Sustainability over the long term. Many changes observed in the environment are long term, occurring slowly over time. Organic agriculture considers the medium- and long-term effect of agricultural interventions on the agro-ecosystem. It aims to produce food while establishing an ecological balance to prevent soil fertility or pest problems. Organic agriculture takes a proactive approach as opposed to treating problems after they emerge. Soil. Soil building practices such as crop rotations, inter-cropping, symbiotic associations, cover crops, organic fertilizers and minimum tillage are central to organic practices. These encourage soil fauna and flora, improving soil formation and structure and creating more stable systems. In turn, nutrient and energy cycling is increased and the retentive abilities of the soil for nutrients and water are enhanced, compensating for the non-use of mineral fertilizers. Such management techniques also play an important role in soil erosion control. The length of time that the soil is exposed to erosive forces is decreased, soil biodiversity is increased, and nutrient losses are reduced, helping to maintain and enhance soil productivity. Crop export of nutrients is usually compensated by farm-derived renewable resources but it is sometimes necessary to supplement organic soils with potassium, phosphate, calcium, magnesium and trace elements from external sources. Water. In many agriculture areas, pollution of groundwater courses with synthetic fertilizers and pesticides is a major problem. As the use of these is prohibited in organic agriculture, they are replaced by organic fertilizers (e.g. compost, animal manure, green manure) and through the use of greater biodiversity (in terms of species cultivated and permanent vegetation), enhancing soil structure and water infiltration. 
Well-managed organic systems with better nutrient-retentive abilities greatly reduce the risk of groundwater pollution. In some areas where pollution is a real problem, conversion to organic agriculture is highly encouraged as a restorative measure (e.g. by the Governments of France and Germany). Air and climate change. Organic agriculture reduces non-renewable energy use by decreasing agrochemical needs (these require high quantities of fossil fuel to be produced). Organic agriculture contributes to mitigating the greenhouse effect and global warming through its ability to sequester carbon in the soil. Many management practices used by organic agriculture (e.g. minimum tillage, returning crop residues to the soil, the use of cover crops and rotations, and the greater integration of nitrogen-fixing legumes) increase the return of carbon to the soil, raising productivity and favouring carbon storage. A number of studies have revealed that soil organic carbon contents under organic farming are considerably higher. The more organic carbon is retained in the soil, the greater the mitigation potential of agriculture against climate change. However, much research is still needed in this field. There is a lack of data on soil organic carbon for developing countries, with no farm-system comparison data from Africa and Latin America, and only limited data on soil organic carbon stocks, which is crucial for determining carbon sequestration rates for farming practices. Biodiversity. Organic farmers are both custodians and users of biodiversity at all levels. At the gene level, traditional and adapted seeds and breeds are preferred for their greater resistance to diseases and their resilience to climatic stress. At the species level, diverse combinations of plants and animals optimize nutrient and energy cycling for agricultural production.
At the ecosystem level, the maintenance of natural areas within and around organic fields and the absence of chemical inputs create suitable habitats for wildlife. The frequent use of under-utilized species (often as rotation crops to build soil fertility) reduces erosion of agro-biodiversity, creating a healthier gene pool - the basis for future adaptation. The provision of structures providing food and shelter, and the lack of pesticide use, attract new or re-colonizing species to the organic area (both permanent and migratory), including wild flora and fauna (e.g. birds) and organisms beneficial to the organic system such as pollinators and pest predators. The number of studies on organic farming and biodiversity has increased significantly in recent years. A recent study reporting on a meta-analysis of 766 scientific papers concluded that organic farming supports more biodiversity than other farming systems. Genetically modified organisms. The use of GMOs within organic systems is not permitted during any stage of organic food production, processing or handling. As the potential impact of GMOs on both the environment and health is not entirely understood, organic agriculture takes the precautionary approach and chooses to encourage natural biodiversity. The organic label therefore provides an assurance that GMOs have not been used intentionally in the production and processing of the organic products. This is something which cannot be guaranteed in conventional products, as labelling the presence of GMOs in food products has not yet come into force in most countries.
A detailed discussion on GMOs can be found in the FAO publication "Genetically Modified Organisms, Consumers, Food Safety and the Environment". Ecological services. The impact of organic agriculture on natural resources favours interactions within the agro-ecosystem that are vital for both agricultural production and nature conservation. Ecological services derived include soil forming and conditioning, soil stabilization, waste recycling, carbon sequestration, nutrients cycling, predation, pollination and habitats. By opting for organic products, the consumer through his/her purchasing power promotes a less polluting agricultural system. The hidden costs of agriculture to the environment in terms of natural resource degradation are reduced.
At the gene level, traditional and adapted seeds and breeds are preferred for their greater resistance to diseases and their resilience to climatic stress. At the species level, diverse combinations of plants and animals optimize nutrient and energy cycling for agricultural production. At the ecosystem level, the maintenance of natural areas within and around organic fields and the absence of chemical inputs create suitable habitats for wildlife. The frequent use of under-utilized species (often as rotation crops to build soil fertility) reduces erosion of agro-biodiversity, creating a healthier gene pool - the basis for future adaptation. The provision of structures providing food and shelter, and the lack of pesticide use, attract new or re-colonizing species to the organic area (both permanent and migratory), including wild flora and fauna (e.g. birds) and organisms beneficial to the organic system such as pollinators and pest predators. The number of studies on organic farming and biodiversity has increased significantly in recent years. A recent study reporting on a meta-analysis of 766 scientific papers concluded that organic farming supports more biodiversity than other farming systems. Genetically modified organisms. The use of GMOs within organic systems is not permitted during any stage of organic food production, processing or handling. As the potential impact of GMOs on both the environment and health is not entirely understood, organic agriculture takes the precautionary approach and chooses to encourage natural biodiversity. The organic label therefore provides an assurance that GMOs have not been used intentionally in the production and processing of the organic products. This is something which cannot be guaranteed in conventional products, as labelling the presence of GMOs in food products has not yet come into force in most countries.
However, with increasing GMO use in conventional agriculture and due to the method of transmission of GMOs in the environment (e.g. through pollen), organic agriculture will not be able to ensure that organic products are completely GMO free in the future. A detailed discussion on GMOs can be found in the FAO publication "Genetically Modified Organisms, Consumers, Food Safety and the Environment". Ecological services.
no
Biotechnology
Can Genetically Modified Crops Promote Biodiversity?
yes_statement
"genetically" modified crops can promote biodiversity. biodiversity can be promoted by "genetically" modified crops
https://www.frontiersin.org/articles/10.3389/fpls.2022.1027828
Genetically engineered crops for sustainably enhanced ... - Frontiers
1Department of Integrative Agriculture, College of Agriculture and Veterinary Medicine, United Arab Emirates University, Al-Ain, Abu Dhabi, United Arab Emirates 2Biotechnology and Plant Improvement Laboratory, Centre of Biotechnology of Sfax, University of Sfax, Sfax, Tunisia 3Michigan State University, Plant and Soil Science Building, East Lansing, MI, United States Genetic modification of crops has substantially focused on improving traits for desirable outcomes. It has resulted in the development of crops with enhanced yields, quality, and tolerance to biotic and abiotic stresses. With the ability to introduce favorable traits into crops, biotechnology has created a path for the involvement of genetically modified (GM) crops in sustainable food production systems. Although these plants heralded a new era of crop production, their widespread adoption faces diverse challenges due to concerns about the environment, human health, and moral issues. Mitigating these concerns with scientific investigations is vital. Hence, the purpose of the present review is to discuss the deployment of GM crops and their effects on sustainable food production systems. It provides a comprehensive overview of the cultivation of GM crops and the issues preventing their widespread adoption, with appropriate strategies to overcome them. This review also presents recent tools for genome editing, with a special focus on the CRISPR/Cas9 platform. The role of crops developed through CRISPR/Cas9 in achieving the Sustainable Development Goals (SDGs) by 2030 is discussed in detail. Some perspectives on the approval of GM crops are also laid out for the new age of sustainability. The advancement of molecular tools for plant genome editing addresses many of the issues with GM crops and facilitates their development without incorporating transgenic modifications. It will allow for a higher acceptance rate of GM crops in sustainable agriculture, with rapid approval for commercialization.
Current genetic modification of crops is forecast to increase productivity and prosperity in sustainable agricultural practices. The right use of GM crops has the potential to offer more benefit than harm, with its ability to alleviate food crises around the world. 1 Introduction Agriculture faces severe challenges in delivering food and maintaining nutritional security through sustainable practices. In relation to the concept of sustainability, sustainable agriculture is defined as a system of growing crops for the short and long term without damaging the environment, society, and the economy for present and future generations (Tripathi et al., 2022). The main goals of sustainable agriculture are to produce high yields of healthy crop products, efficiently use environmental resources with minimal damage, enhance the quality of life within society through the just distribution of food, and provide economic benefits for farmers (Tseng et al., 2020). These goals have become a prominent issue of discussion in agriculture in the past few years and have been recognized widely in scientific communications, since it is difficult to produce large amounts of food with minimal environmental degradation. However, there has been a remarkable breakthrough in the field of agriculture through plant genetic modification. Plant biotechnology has generated products that have helped the agriculture sector achieve enhanced yields in a more sustainable manner. The sector has witnessed an increase in production capacity as large as that seen during the green revolution of the early 1970s (Raman, 2017). A genetically modified (GM) crop is defined as any plant whose genetic material has been manipulated in a particular way that does not occur under natural conditions, but with the aid of genetic techniques (Sendhil et al., 2022). Agriculture was the first sector to invest heavily in the use of genetic modifications (Raman, 2017).
The massive experiments in agricultural biotechnology have enabled the development of suitable traits in plants for food production. The employment of genetic tools for the introduction of a foreign gene, as well as the silencing and expression of specific genes in plants, has brought a dramatic expansion of GM crops (Kumar et al., 2020). It has led to the propagation of crops that are disease resistant, tolerant of environmental stress, and have an improved nutrient composition for consumers (Batista et al., 2017). Techniques for the improvement of plants for food production have been pursued since humankind ceased migrating and came to rely on agriculture for survival. At present, more advanced molecular tools have been developed for specific genetic manipulation of crops than the conventional methods. Genome editing is the process of making targeted improvements to a plant’s genome, specifically within the plant’s own family (Kaur et al., 2022). Its precision in changing almost any desired location in the genome makes it distinct from other breeding methods. Most of the changes that are made through genome editing occur naturally within plants, through traditional breeding or evolution (Graham et al., 2020). However, through genome editing such results are obtained within years rather than decades. With this method, there is no addition of foreign genes, and it is more accurate and predictable than earlier techniques of plant genetic modification (Kaur et al., 2022). In the twenty-first century, the genetic modification of crops is considered a potential solution for achieving the goals of sustainable agriculture (Oliver, 2014). However, the use of GM crops has raised complex issues and dilemmas related to their safety and sustainability. There have been several debates which have led some countries to contest the use, cultivation, and commercialization of GM crops (Kikulwe et al., 2011).
Specifically, the majority of European and Middle Eastern countries have imposed full or partial limitations on the commercialization of GM crops. Regulatory approval for the commercialization of GM crops is hampered by poor communication and awareness brought about by consumer mistrust (Mustapa et al., 2021). Moreover, the difficult process of completing risk assessments and meeting biosafety regulations has only compounded the existing mistrust of GM crops, based on ethics, history and customs. Nevertheless, because GM crops are considered good candidates for sustainable food production, it is imperative to perform a risk assessment of any developed GM crop, exploring its negative and positive consequences for current agricultural developments. In this regard, the goal of the present study is to evaluate the use of genetic manipulation and genome editing of crops for overcoming global food challenges in a sustainable manner. It aims to review current knowledge of GM crops and the concerns and dilemmas associated with them, and provides appropriate solutions to overcome them. The study further delivers several perspectives on their incorporation into sustainable food production systems and on eliminating the mistrust placed on GM crops for the achievement of the Sustainable Development Goals (SDGs). 2 Developmental pathway of GM crops over the years The genetic modification of plants dates back approximately 10,000 years with the practice of artificial selection and selective breeding. The selection of parents with favorable traits and their utilization in breeding programs has facilitated the introgression of these traits into their offspring (Raman, 2017). For instance, artificial selection of maize out of weedy grasses having smaller ears and fewer kernels has resulted in the generation of edible maize cultivars (Doebley, 2006).
In 1946, scientists discovered that genetic material is transferable between species, an advance that paved the way for contemporary genetic modification (Figure 1) (James, 2011). This was followed by Watson and Crick’s identification of the double-helical DNA structure in 1953 and the subsequent articulation of the central dogma (Cobb, 2017). Successive advances in the experiments by Boyer and Cohen in 1973, which included the extraction and introduction of DNA between various species, resulted in the engineering of the world’s first GM organism (Cohen et al., 1973). In 1983, antibiotic-resistant tobacco and petunia, the first GM crops, were successfully developed by three independent groups of scientists (Fraley, 1983). FIGURE 1 Figure 1 Timeline of various events from the discovery of genes being transferable in 1946 leading to the contemporary era of advanced tools for developing GM crops. In 1990, GM tobacco plants resistant to tobacco mosaic virus (TMV) were first commercialized by China (Wu and Butz, 2004). In 1994, the Food and Drug Administration (FDA) approved the Flavr Savr tomato (Calgene, USA) as the first GM crop for human consumption (Vega Rodríguez et al., 2022). Antisense technology was used to genetically modify this tomato plant by interfering with the production of the enzyme polygalacturonase, the major enzyme responsible for pectin disassembly in ripening fruit, which retarded its ripening and protected it from rot (Bawa and Anilakumar, 2013). Several transgenic plants were approved for expansive production in 1995 and 1996. For instance, transgenic cantaloupe Charentais melons expressing an antisense ACC oxidase gene were developed to block their ripening process (Ayub et al., 1996).
Some of the GM crops that received initial FDA approval included cotton, corn and potatoes (modified with the Bacillus thuringiensis (Bt) gene, Monsanto), Roundup Ready soybeans (resistant to glyphosate, Monsanto), and canola (increased oil production, Calgene) (Bawa and Anilakumar, 2013). At present, genetic modifications are performed on various cereals, fruits, and vegetables, including rice, wheat, strawberry, lettuce, and sugarcane. Genetic modifications are also carried out to increase vaccine bioproduction in plants, improve nutrients in animal feed, and confer tolerance to environmental stresses such as salinity and drought (Kurup and Thomas, 2020). 2.1 Method of genetic modification of crops The creation of a GM crop is a complex phenomenon that involves several steps, from the identification of the target gene to the regeneration of transformed plants (Figure 2). FIGURE 2 Figure 2 Illustration of the process of genetic modification of crops. It involves the identification of the gene of interest, its isolation, and insertion into the genome of a desired plant species. The modified plants are regenerated and used for commercialization. 2.1.1 Target gene identification Developing a GM plant requires the determination of a gene of interest for a particular trait, such as a drought-tolerance gene, that is already present in a specific plant species (Snow and Palma, 1997). Genes are identified using the available data and knowledge about their sequences, structures, and functions. In the case of an unknown gene, a more laborious method is used, such as map-based cloning. The gene of interest is isolated and amplified using the Polymerase Chain Reaction (PCR), which allows the desired gene to be amplified into several million copies for gene assembly (Schouten et al., 2006).
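As a back-of-the-envelope illustration (not a lab protocol), the amplification described above follows simple doubling arithmetic: each PCR thermal cycle roughly doubles the number of copies of the target sequence, so n cycles yield up to 2^n copies from a single template. The sketch below assumes ideal efficiency.

```python
# Idealized PCR amplification: each cycle doubles the copy count,
# so n cycles give up to templates * 2**n copies.
def pcr_copies(cycles: int, templates: int = 1) -> int:
    """Upper-bound copy number after a given number of PCR cycles."""
    return templates * 2 ** cycles

# About 21 cycles already exceed two million copies from one template;
# 20-35 cycles are typical in practice, yielding the "several million
# copies" mentioned above.
print(pcr_copies(21))  # 2097152
```

In reality per-cycle efficiency is below 100%, so these figures are upper bounds.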
2.1.2 Cloning of the gene of interest and its insertion into a transfer vector Once several copies of the gene are obtained, it is inserted into a construct downstream of a strong promoter and upstream of a terminator. This construct is then transferred into a bacterial plasmid (a manufacturing vector), allowing for the duplication of the gene of interest within the bacterial cell (Zupan and Zambryski, 1995). The DNA construct with the gene of interest is introduced into plants via Agrobacterium tumefaciens or a gene gun (particle bombardment) (Lacroix and Citovsky, 2020). 2.1.3 Modified plant cell selection and plant regeneration When antibiotic resistance is used as a selectable marker gene, only transformed plant cells survive; these are regenerated into entire plants using different regeneration techniques (Ibáñez et al., 2020). Several genetic analyses are performed to determine the insertion and activation of the gene of interest and its interaction with different plant pathways that may cause unintended changes in the final traits of the plants (Shrawat and Armstrong, 2018). The transformed plants are introduced into field conditions and risk assessments are performed for their environmental and health impacts (Giraldo et al., 2019). Nonetheless, plants with foreign genes have remained under societal scrutiny in crop production. To overcome these concerns related to transgenic crops, newer biotechnological techniques, such as cisgenesis and intragenesis, have been developed as alternatives to transgenesis (Holme et al., 2013; Kumar et al., 2020). In these methods, the genetic material used for trait enhancement comes from identical or related plant species with sexually compatible genes. Besides these techniques, genome editing tools have enabled plant transformation with ease, accuracy, and specificity.
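The cloning and selection steps above can be sketched as a toy in-silico model. This is illustrative only: the element names (a 35S promoter, NOS terminator, and nptII kanamycin-resistance marker) are common textbook examples, not the specific vector design used in any study cited here.

```python
# Toy model of a plant transformation construct: the expression cassette
# places the gene of interest downstream of a promoter and upstream of a
# terminator, and the plasmid carries a selectable marker.
cassette = ["35S_promoter", "gene_of_interest", "NOS_terminator"]
plasmid = {
    "backbone": "binary_vector",
    "cassette": cassette,          # promoter -> gene -> terminator
    "selectable_marker": "nptII",  # confers kanamycin resistance
}

def survives_selection(cell: dict, marker: str = "nptII") -> bool:
    """Only cells carrying the selectable marker survive on antibiotic media."""
    return cell.get("selectable_marker") == marker

print(survives_selection(plasmid))               # transformed cell: True
print(survives_selection({"backbone": "none"}))  # untransformed cell: False
```

The point of the sketch is the logic of selection: regeneration proceeds only from cells in which the marker (and hence, usually, the linked cassette) is present.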
Some of these methods, including Zinc Finger Nucleases (ZFNs), Transcription Activator-Like Effector Nucleases (TALENs), and the Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR)/Cas system, were directed towards the concerns about the unpredictability and inefficiency of traditional transgenesis (Bhardwaj and Nain, 2021). These tools are set to develop enhanced plant varieties through accurate modification of endogenous genes and site-specific introduction of target genes. 2.2 Status of GM crops The global production area of GM crops increased between 1996 and 2019 from 1.7 to 190.4 million ha, an approximately 112-fold increase (Table 1; Figure 3) (ISAAA, 2019). Consequently, GM crops have been commercialized at a rate unmatched in the history of present-day agriculture. Currently, the world’s largest GM crop producer is the USA with 71.5 Mha (37.5%), with GM cotton, maize, and soybean accounting for 90% of its production (ISAAA, 2019). Brazil was the second-largest GM crop producer with 52.8 Mha (27.7%) and Argentina the third-largest with 24 Mha. Canada and India were the fourth- and fifth-largest producers with 12.5 and 11.9 Mha, respectively (ISAAA, 2019). TABLE 1 Table 1 The proportion of area covered and common GM crops in various parts of the world. FIGURE 3 Figure 3 Percentage of globally adopted GM crops and their production area (hectares) in various countries. The largest proportion of GM crops grown is soybean (48%), and the USA covers a substantial area of 71.5 Mha with different GM crops. In 2019, soybean held the largest share of GM crop area (48%), while GM maize occupied 60.9 million hectares globally, around 32% of global maize production (Figure 3) (Turnbull et al., 2021). GM cotton covered 14% of the global cotton production area in 2019, with 25.7 Mha, while GM canola occupied 5% of the area, from its 27% share of global production in 2019 (Turnbull et al., 2021).
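The headline adoption arithmetic cited above can be checked directly from the quoted figures (ISAAA, 2019): global GM crop area grew from 1.7 to 190.4 million ha between 1996 and 2019, and the USA's 71.5 Mha share is quoted as 37.5%.

```python
# Verifying the cited figures: ~112-fold growth in global GM area,
# and the USA's share of the 2019 total.
start_mha, end_mha = 1.7, 190.4
fold_increase = end_mha / start_mha
usa_share_pct = 100 * 71.5 / end_mha

print(round(fold_increase))     # 112-fold, matching the cited figure
print(round(usa_share_pct, 1))  # ~37.6%, vs the 37.5% quoted (rounding)
```

The small difference in the USA share comes from rounding in the source figures.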
Beyond GM maize, soybean, canola and cotton, GM crops planted in different countries have also included sugarcane, papaya, alfalfa, squash, apples and sugar beets. There has been a sharp increase in the number of plant species with approved GM varieties. As of January 2022, around 44 countries had granted regulatory acceptance to 40 GM crops and 509 genetic modification events (ISAAA, 2020). These approvals cover 41 commercial traits for use in cultivation, food, and feed. 3 Concerns and related issues of GM crop production The inception of GM crops has been controversial, mainly due to ethical concerns and sustainability issues surrounding their negative impacts. These issues take different forms, such as the detrimental effects of GM crops on the environment and human health, the ideology of creating new life forms within society, and the intellectual property ownership of GM crops that provides economic benefits to specific people (Oliver, 2014). Most of these issues arise from the argument that farmers and seed companies, rather than consumers, attain the benefits of GM crops (Raman, 2017). 3.1 In relation to the environment The introduction of GM crops may cause adverse impacts on the environment, a concern that has been raised ethically by certain sections of society (Figure 4). It has been argued that GM crops pose a threat to crop biodiversity because of the hybridization of GM crops with related non-GM crops through the transfer of pollen (Fernandes et al., 2022). GM crops may become invasive over time and affect populations of local wild crop species. The use of a specific chemical herbicide to control weeds in fields of GM crops tolerant to that herbicide will lead to the appearance of highly resistant weeds that are difficult to control.
Due to the heavy use of chemicals to control those weeds, soil and water degradation can also occur (Sharma et al., 2022). The use of GM crops can have negative impacts on non-target organisms such as predators and honeybees (Roberts et al., 2020). For instance, the spread of genetically engineered herbicide-tolerant corn and soybean, together with the accompanying use of chemical herbicides, has damaged the habitat and population of the monarch butterfly in North America (Boyle et al., 2019). Such environmental risks raised by GM crops are considered difficult to eliminate. FIGURE 4 Figure 4 Major environmental concerns related to GM crops. The manipulated crops are widely opposed over gene flow and its detrimental effect on natural resources and biodiversity. 3.2 In relation to human health The biggest ethical concern over the genetic modification of crops is their potential harmful effects on human beings (Figure 5). It is assumed that consumption of GM crops can result in the development of certain diseases that are immune to antibiotics (Midtvedt, 2014). This immunity is thought to develop through the transfer of antibiotic-resistance genes from GM crops into humans after consumption (Midtvedt, 2014). The long-term effects of GM crops are not known, which decreases their consumption rate. A number of cultural and religious communities are also against these crops and consider them detrimental to humans. It is believed that GM crops can trigger allergic reactions in human beings. In a study conducted to enhance the nutritional quality of soybeans (Glycine max), a methionine-rich 2S albumin from the Brazil nut was transferred into transgenic soybeans. Since the Brazil nut is a common allergenic food, allergenicity testing of the transgenic soybean revealed allergic reactions in three subjects through skin-prick testing.
This allergenicity was attributed to the introduction of the 2S albumin gene of the Brazil nut into the soybeans (Nordlee et al., 1996; EFSA et al., 2022). There are also assumptions that GM crops can cause the development of cancerous cells in human beings (Touyz, 2013). It is argued that cancers are caused by mutations in DNA, and that the introduction of new genes into the human body may cause such mutations (Mathers, 2007). Antibiotic-resistance genes from genetically modified plants, used as selectable marker genes, can be transferred to bacteria in the gastro-intestinal tract of humans (Karalis et al., 2020). The risk of such an occurrence is very low, but it has to be considered when assessing the biosafety of transgenic plants during field trials or commercialization approvals. The health risks of foods derived from genetically engineered crops are still being debated within the scientific community, pending rigorous evidence. 3.3 In relation to the development and intellectual property rights of GM crops In the ethical debate over GM crops and sustainability, philosophical reasons are fundamental to the case against the development of these crops. Genetic modification of crops is viewed by some as an inappropriate interference in the life of an organism (Evanega et al., 2022). The gap in this ethical ideology is aggravated in developing countries due to the prominent role of large biotech companies in deciding how life forms are to be altered to profit from them. Concerns over intellectual property rights, patents on these crops, and their ownership are at the heart of the ethical issues (Xiao and Kerr, 2022). The private sector provides the majority of agricultural inputs, such as fertilizers, pesticides and seeds of improved crop varieties that farmers store and reuse from season to season (Lencucha et al., 2020). This practice of seed reuse has made it difficult to gain benefits from investments in artificial breeding.
Nonetheless, with the production of hybrid varieties and advances in genetic technologies, it became possible to protect newly developed crop varieties, especially larger-volume crops such as soybean and maize (Liu and Cao, 2014). This is particularly true for genetic modification tools, which give producers stronger intellectual property rights over their plants (Brookes and Barfoot, 2020). Patent rights give seed companies monopoly power, requiring farmers to purchase seeds from the patent owners for each year of plantation (Maghari and Ardekani, 2011). Some of these seeds, known as terminator seeds, develop into crops bearing infertile seed. Terminator technology was used to develop seeds that prevented the diversion of genetic modifications to other plants, but it also prevented farmers from propagating seed (Niiler, 1999). This forced farmers to purchase new seeds each growing season, giving seed producers greater control over the utilization of their seeds. It is considered ethically wrong to develop plants whose seeds are sterile and which farmers therefore cannot sow in a second year of plantation (Bawa and Anilakumar, 2013). However, the use of terminator seeds that produced infertile crops was temporarily discontinued. Intellectual property rights for GM crops protected the crop varieties but restricted farmers from using GM crop seeds for another cycle (Rodrigues et al., 2011). Moreover, intellectual property rights created a barrier to innovation by limiting access to GM crops for several purposes (Redden, 2021). Despite these concerns, GM crops are considered one of the tools for achieving sustainable food production. However, possible solutions for their negative impacts need to be evaluated in order to secure their benefits.
The detrimental effects of GM crops can be reduced or eliminated through appropriate measures taken at different stages of incorporation, marketing, and human consumption, ensuring that GM plants are as harmless as non-GM crops. This will lead to meeting the goals of sustainability and allow for the incorporation of GM crops into sustainable food production (Figure 6).

Figure 6 Schematic representation of the pathway for countering the concerns about GM crops. The development of separate settings, a regulatory framework for biosafety and risk assessments, and a commercialization continuum for GM crops will lead to their beneficial impacts, which will result in meeting the goals of sustainability.

4.1 Towards the negative impacts on the environment

One of the major concerns about GM crops is their potential damage to the environment. They affect the environment through gene flow from GM crops to neighboring non-GM crops via pollen, a phenomenon known as genetic pollution (Fitzpatrick and Ried, 2019). It is stated that genetic pollution will result in a decline of biodiversity. However, gene transfer through pollen typically occurs over distances of 50 m to 100 m (Carrière et al., 2021). Therefore, a feasible solution is to grow GM crops at a distance far enough from non-GM crops to lower the chances of gene flow. In addition, such a practice will reduce crop pollen viability and competitiveness after travel over long distances between plants (Nishizawa et al., 2010). One study reported that a herbicide-resistance gene from a field of genetically modified oilseed rape moved to neighboring non-genetically modified oilseed rape (Nishizawa et al., 2010).
The investigation indicated that one in ten thousand oilseed rape plants contained the modified gene at a distance of 50 m (Nishizawa et al., 2010). Therefore, it is suggested that GM crops be grown at least 50 m away from non-GM crops during their use in sustainable agriculture, as this practice will reduce the rate of gene flow (Carrière et al., 2021).

4.2 Towards the negative impacts on societal and community health

Sustainable agriculture is concerned with the health effects of GM crops on current and future generations. The health effects of GM crops remain an ethical issue requiring investigation, owing to the lack of direct studies linking GM crop consumption to human health outcomes (Garcia-Alonso et al., 2022). A possible solution is the constant regulation of these crops through biosafety testing and risk assessment by health authorities before consumption (Akinbo et al., 2021). Biosafety testing of GM crops should apply the standard that foods developed from GM plants are intended to be as safe as genetically similar varieties of non-GM plants. To date there is no solid evidence that GM crops approved in the US and other countries have harmed humans or animals that consumed them, which suggests that the safety assessment of GM crops is quite robust. However, to predict any adverse effects of GM crop consumption on human health, scientifically sound, long-term studies need to be conducted under controlled and validated experimental conditions on animals such as rats, cows, and pigs.

4.3 Towards the negative impacts on the economy

Although GM crops pass through various regulatory measures and meet testing standards, these crops are still withheld from release to the market (Davison and Ammann, 2017).
For instance, with the introduction of new drugs, people are given a choice to be early or later adopters, but after certain stages of testing the drugs are released to the market for everyone; on that ground it would be unethical to prevent the release of GM crops that have passed testing and met the regulatory measures (Teferra, 2021). Withholding the release of GM crops to the market forgoes the economic benefits that countries could attain through their production. One solution developed for this issue is the labelling of GM crops for market sales (Delgado-Zegarra et al., 2022). Such labelling supports consumer sovereignty, as people have the fundamental right to know what food they are consuming and about the processes involved in its production (Yeh et al., 2019). Positive information about GM crops needs to be presented to the public to counter negative assumptions and improve their marketability. Surveys of public opinion indicate that the majority of people in the USA support labelling of GM crops (Wunderlich and Gatto, 2015). According to the Food and Drug Administration (FDA), the labelling of GM crops is not intended to indicate that they are harmful, but rather to describe the attributes of these crops to the public (Borges et al., 2018).

5 Genome editing in the new era as a promising solution for crop manipulation

Scientists have developed advanced molecular tools for the precise modification of plants. Zinc finger nucleases (ZFNs) were first applied to plant trait improvement in 2005, in Nicotiana tabacum (Raza et al., 2022). A ZFN is a synthetic endonuclease composed of a designed zinc finger protein (ZFP) joined to the cleavage domain of the restriction enzyme FokI (Paschon et al., 2019). It can be redesigned to cleave new targets by creating ZFPs with newly selected recognition sequences.
The cleavage event instigated by the ZFN triggers cellular repair processes that in turn mediate efficient manipulation of the desired locus. Within five years, transcription activator-like effector nucleases (TALENs) were developed as a new genome editing technique that introduces specific DNA double-strand breaks (DSBs) as an alternative to ZFNs (Raza et al., 2022; Forner et al., 2022). TALENs are similar to ZFNs and consist of the non-specific FokI nuclease domain fused to a changeable DNA-binding domain. This DNA-binding domain possesses highly conserved repeats acquired from transcription activator-like effectors (TALEs) (Tsuboi et al., 2022), proteins synthesized by bacteria of the genus Xanthomonas to alter the transcription of genes in host plant cells. Although these two techniques modernized plant genomics, each had its own limitations. In 2013, however, the new editing technique CRISPR/Cas9 (clustered regularly interspaced short palindromic repeats) emerged, giving plant breeders a broad ability to make targeted sequence variations and enabling rapid improvement of crops (Nekrasov et al., 2013). This genome editing approach uses site-directed nucleases (SDNs) to make exceptionally precise incisions at a particular region of DNA (Metje-Sprink et al., 2019; He and Zhao, 2020). SDN techniques are classified into three categories: SDN-1, SDN-2, and SDN-3 (Lusser et al., 2012). The SDN-1 technique introduces a single- or double-stranded break to remove a part of the DNA; the SDN-2 technique utilizes a small donor DNA template to induce a desired mutation sequence; and the SDN-3 technique uses a much longer donor DNA template that is introduced into the target DNA region, making it similar to traditional recombinant DNA technology (Podevin et al., 2013).
5.1 CRISPR/Cas9 tool for plant genome editing

The CRISPR/Cas system is composed of CRISPR repeat-spacer arrays and Cas proteins. It is a bacterial RNA-mediated adaptive immune system that safeguards against bacteriophages and other harmful genetic elements by cleaving the foreign nucleic acid genome (Hu and Li, 2022). The CRISPR system is based on RNA-guided interference with DNA (Koonin et al., 2017). It is divided into two classes based on the Cas genes and the type of interference complex: class 1 CRISPR/Cas systems utilize multi-protein complexes of Cas proteins for interference, while class 2 systems use a single effector polypeptide together with CRISPR RNAs (crRNAs) (Hu and Li, 2022). In comparison to TALENs and ZFNs, the CRISPR system can target multiple sites using several single guide RNAs (sgRNAs) with a single Cas9 protein expression (Figure 7). This kind of multiplex editing has advanced its use in genome engineering and pyramid breeding (Chen et al., 2019). It can create multigene knockouts and knock-ins, chromosomal translocations, and deletions (Salsman and Dellaire, 2016). Various approaches have been employed for multiplex guide RNA (gRNA) expression from one cassette in plants. Editing efficiency can be maintained with one promoter driving consistent expression of each gRNA placed in a small vector (Chen et al., 2019). This has been achieved by utilizing a polycistronic gene in which gRNAs are interspersed with Csy4 recognition sites, transfer RNA sequences, or ribozyme sites, which is processed in the cell to release mature gRNAs for editing (Gao and Zhao, 2013; Xie et al., 2015; Cermák et al., 2017). Moreover, a newer generation of CRISPR nuclease termed Cpf1, which processes its own crRNA, has proven an efficient system for complex genome editing in crops (Wang et al., 2017).

Figure 7 Illustration of the CRISPR/Cas9-sgRNA plant genome editing system.
sgRNAs are designed for the target gene using available online resources. The CRISPR complex is formed with the target sgRNA and a suitable Cas9 variant, cloned into a plant vector, and introduced into the target plant species with a suitable transformation technique. Putative transformants are selected after identifying the Cas9 and target sgRNA through screening by PCR or restriction enzyme (RE) genotyping and DNA sequencing. Plants with edited genomes are then selected and regenerated.

5.2 CRISPR/Cas9 applications for the SDGs

The SDGs were launched in 2015. They comprise 17 goals, with enhanced human health, poverty eradication, and improved food security being three of the most important (Aftab et al., 2020), all set for achievement by 2030. The successful achievement of these essential goals requires substantial adoption of technology and innovation. Advances in plant breeding have driven efficient food production systems since the middle of the 20th century (Smyth, 2022). With further improvement in the current era of agricultural biotechnology through the CRISPR/Cas9 system, crop yield improvements, nutritional enhancements, and reduced environmental impacts are possible (Tripathi et al., 2022). This highlights the important role genome editing technologies can play in achieving these three essential SDGs. The CRISPR/Cas9 technique enhances sustainability and improves global food security in various ways. Climatic change, accompanied by large variations in temperature, affects cropping, and the function of genes in temperature stress responses is essential for developing and breeding temperature-tolerant crops. In this connection, the CRISPR/Cas9 system was used to knock out a chilling-related gene and a heat-responsive factor in tomato, C-repeat binding factor 1 (SlCBF1) and Brassinazole Resistant 1 (SlBZR1).
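The first step of sgRNA design described above, finding candidate target sites in the gene of interest, can be illustrated with a minimal sketch. The snippet below is an assumption about how such tools begin, not a reimplementation of any particular online resource: it simply scans both strands of a DNA sequence for 20-nt protospacers adjacent to an NGG PAM, the motif required by SpCas9. Real design tools additionally score on-target efficiency and off-target risk.

```python
# Minimal sketch (illustrative only): enumerate SpCas9 sgRNA candidates,
# i.e. 20-nt protospacers immediately followed by an NGG PAM, on both strands.

def revcomp(seq):
    """Reverse complement of a DNA string."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def find_sgrna_sites(seq, spacer_len=20):
    """Return (strand, start, protospacer, PAM) for every NGG-adjacent site."""
    sites = []
    for strand, s in (("+", seq), ("-", revcomp(seq))):
        # Stop early enough that the 3-nt PAM window stays inside the sequence.
        for i in range(len(s) - spacer_len - 2):
            pam = s[i + spacer_len : i + spacer_len + 3]
            if pam[1:] == "GG":  # NGG PAM: any base, then GG
                sites.append((strand, i, s[i : i + spacer_len], pam))
    return sites

if __name__ == "__main__":
    target = "ATGCTGACCTTGGACGTTCAGGATCCTAGCTAGGCTTACGGATCGATTGG"
    for strand, pos, spacer, pam in find_sgrna_sites(target):
        print(strand, pos, spacer, pam)
```

In practice, candidate sites from such a scan would then be filtered for GC content and checked against the whole genome for off-target matches before cloning into the expression vector.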
These genes were shown to be firmly associated with temperature tolerance, as the altered alleles of SlCBF1 and SlBZR1 displayed lessened chilling and heat stress tolerance, respectively (Yin et al., 2018). This application of genome editing can aid SDG-13 and SDG-15, which promote more environmentally sustainable agriculture. Building a sustainable environment improves the life of organisms on earth. Progress in enhancing the sustainability of present agricultural systems is vital given the pressures of climatic change, clearing of forest lands, and utilization of arable land for non-agricultural activities. Without research focused on genome editing, declines in yield due to the impacts of climatic change could severely damage food security. Hence, abiotic stress tolerance can also contribute to SDG-2 by reducing hunger through food production under various climatic conditions. Pseudomonas syringae is the causal agent of bacterial speck disease, a major threat to tomato production (Cai et al., 2011). In an early application, CRISPR/Cas9 was utilized to knock out a positive regulator of downy mildew disease in tomato, generating mutant alleles of the tomato ortholog of Arabidopsis downy mildew resistance 6 (DMR6). The mutant lines showed resistance against P. syringae, Xanthomonas spp., and Phytophthora capsici (Paula et al., 2016) and were highly useful resources for breeding tomato plants. In another common biotic stress, susceptibility to Oidium neolycopersici infection was associated with a few members of the transmembrane protein family Mildew Locus O (MLO). Among the 16 MLOs in tomato, the key gene was identified as SlMLO1, whose natural loss-of-function mutants displayed resistance to powdery mildew disease (Zheng et al., 2016).
Mutant strains generated via CRISPR/Cas9 containing homozygous SlMLO1 alleles, 48-bp truncated versions of the wild-type SlMLO1, exhibited resistance to O. neolycopersici infection. Similarly, Nekrasov et al. demonstrated that CRISPR/Cas9-derived knockout of MLO provided powdery mildew resistance in tomatoes (Nekrasov et al., 2017). The SlMLO1 plants produced through the CRISPR/Cas9 technique were also devoid of any foreign T-DNA sequence, which made them indistinguishable from natural SlMLO1 mutant plants (Nekrasov et al., 2017). In addition to major bacterial, viral, and fungal diseases, CRISPR/Cas9 has been applied against oomycete infections. In papaya, Phytophthora palmivora is a devastating oomycete pathogen; a mutant with a functional cysteine protease inhibitor (PpalEPIC8) was developed that resulted in enhanced P. palmivora resistance (Gumtow et al., 2018). Similarly, cocoa has been made resistant to another oomycete pathogen, Phytophthora tropicalis, via the CRISPR/Cas9 system (Fister et al., 2018). As with abiotic stress tolerance, engineering biotic stress tolerance can also contribute to SDG-15, as it leads to enhanced living conditions for plants and helps create a sound environment for the organisms that depend on plants for their survival and growth.

5.2.3 Crop yield enhancement

Genome editing tools are employed primarily for improving crop yield, a composite characteristic that relies on various components.
CRISPR/Cas has been used to knock out negative regulators of yield-controlling factors such as grain weight (TaGW2, OsGW5, OsGLW2, or TaGASR7), grain number (OsGn1a), panicle size (OsDEP1, TaDEP1), and tiller number (OsAAP3), achieving the desired traits in plants through loss-of-function alterations in these genes (Li et al., 2016; Li et al., 2016; Zhang et al., 2016; Liu et al., 2017; Zhang et al., 2018a; Lu et al., 2018b). In rice, simultaneous CRISPR knockout of several grain-weight-related genes (GW2, GW5, and TGW6) led to trait pyramiding that efficiently increased grain weight (Xu et al., 2016). Huang et al. (2018) recently combined CRISPR/Cas9 with pedigree analysis and whole-genome sequencing for the large-scale identification of genes responsible for composite quantitative traits, including yield. The study analyzed 30 cultivars derived from the Green Revolution miracle rice cultivar IR8 and identified 57 genes shared by all high-yielding lines as candidates for Cas9 knockout or knockdown. Phenotypic trait analysis indicated that most of these genes play a role in determining rice yield, providing insight into yield improvement and facilitating the molecular breeding of improved rice varieties. A high-yielding commercial corn was produced by DuPont Pioneer through CRISPR/Cas9 knockout in a waxy corn line (Waltz, 2016a). Genome editing techniques are also used to develop semi-dwarf corn varieties with higher production and lower height, in order to lower the moisture and nutrient requirements of the corn (Bage et al., 2020). Moreover, in maize, multiple grain yield traits were enhanced by creating weak promoter alleles of CLE genes, and a null allele of a recently identified, partially redundant compensating CLE gene, utilizing the CRISPR/Cas9 technique. Considerable gene editing research is being undertaken on wheat for increased yield, seed size, and seed weight (Li et al., 2021b).
Although the future of plant genome editing remains uncertain in Europe, researchers at the Vlaams Instituut voor Biotechnologie (VIB) in Belgium have applied to undertake field trials of three gene-edited corn varieties with higher yields and enhanced digestibility (VIB, 2022). Most genome-edited crops for yield enhancement will increase farm and household revenues, which in turn reduces poverty, although few studies to date have measured this goal directly. One study reported that the adoption of Bt cotton, developed through a transgenesis approach, increased income by 134% for farmers in India living on less than 2 USD/day (Subramanian and Qaim, 2010), mainly due to improved yields and lower input costs. The potential of genome editing for yield increases indicates that, similar to GM crop adoption, genome-edited crops can also improve farmers' incomes. The early evidence of possible increases in household income due to yield gains indicates that genome editing can make significant contributions to SDG-1 for eradicating poverty. The substantial genome editing research aimed at increasing the yield of major staple and other essential crops likewise indicates considerable potential for contributing to SDG-2, which aims to end hunger and achieve food security.

5.2.4 Quality improvement

The quality of crops may differ depending on the breeding techniques used. Genome editing has improved several quality traits such as nutrition, fragrance, starch content, and storage quality of crops. Using CRISPR/Cas9, knockout of Waxy resulted in rice with enhanced eating and cooking quality and low amylose content (Zhang et al., 2018b). Resistant-starch-rich varieties with elevated amylose were developed by altering the starch branching enzyme gene SBEIIb with CRISPR/Cas9.
Consuming food with increased amylose content is beneficial for patients with diet-related noninfectious chronic diseases (Sun et al., 2017). Another important quality trait for commercial and edible rice varieties is fragrance. The biosynthesis of a major rice fragrance compound, 2-acetyl-1-pyrroline, is due to a variation in the betaine aldehyde dehydrogenase 2 (BADH2) gene. With the TALEN genome editing tool, specific alteration of OsBADH2 produced a fragrant rice variety with a 2-acetyl-1-pyrroline content similar to that of the natural fragrant rice variant (Shan et al., 2015). In Western countries, celiac disease is triggered by the gluten proteins of cereal crops in more than 7% of individuals. The wheat plant contains nearly 100 genes or pseudogenes of the gluten-encoding α-gliadin gene family. The CRISPR/Cas9 system opens new pathways to modify traits governed by massive gene families with redundant members. Researchers have created low-gluten wheat by the simultaneous knockout of the most conserved domains of the α-gliadin family (Sanchez-Leon et al., 2018). Other high-quality plants produced by CRISPR/Cas9 include Camelina sativa (Jiang et al., 2017) and Brassica napus (Okuzaki et al., 2018) with high-oleic-acid oil seeds, long shelf-life tomato varieties (Li et al., 2018a), tomatoes with enhanced lycopene (Li et al., 2018) or γ-aminobutyric acid content (Li et al., 2018b), and potatoes with low levels of toxic steroidal glycoalkaloids (Nakayasu et al., 2018). The increased lycopene acts as an antioxidant, lowering the risk of cancer and heart disease (Zaraska, 2022). Recently in the UK, Rothamsted Research received approval for field trials of gene-edited wheat that synthesizes less asparagine, a precursor of the potential carcinogen acrylamide formed in toasted bread (Case, 2021).
Genome editing applications for quality improvement have the potential to make considerable contributions to SDG-3. Quality improvements in crops promote human health and well-being, and the capability of genome editing to produce food that may avert specific diseases is directly associated with beneficial health implications.

5.2.5 Nutritional enhancement

One application of genome editing is to enhance nutritional metabolism and decrease undesirable substances in crops through gene expression regulation. In 2021, Japan launched the first genome-edited tomato, Sicilian Rouge High GABA (gamma-aminobutyric acid). The edited variety has around four to five times more GABA than ordinary tomatoes. The increase resulted from CRISPR/Cas9 genome editing targeting the C-terminal autoinhibitory domain (AID) of GAD3, an enzyme involved in GABA biosynthesis (Nonaka et al., 2017). A frameshift mutation induced in this autoinhibitory domain caused early termination of translation and excision of the autoinhibitory domain of GAD3 (Nonaka et al., 2017). This strategy removed the inhibition of GAD3 and increased the enzymatic activity of GABA biosynthesis, which is normally suppressed, without manipulating the expression level of GAD3. Furthermore, the CRISPR/Cas9 system was utilized to improve total wheat protein content and grain weight through knockout of the GW2 gene, which encodes a RING-type E3 ubiquitin ligase known to govern the cell number of spikelet hulls (Zhang et al., 2018a). Genome editing has also been applied to lettuce, producing a new variant with enhanced levels of thiamine, β-carotene, and vitamin C (Southy, 2022). Research is additionally focused on enhancing vitamin A and provitamin A content in corn (Maqbool et al., 2018; Xiao et al., 2020).
In the US, a genome editing study targeted increased wheat fiber content, and field trials of this enhanced-fiber wheat are underway (Knisley, 2021). Ensuring sufficient nutrient content in human diets confers life-long health benefits and prevents debilitating diseases. The promising results of genome editing tools in nutritional enhancement are especially important for food-insecure developing countries. With this application, genome editing underpins further parts of SDG-2 and SDG-3: achieving and consuming nutritionally fortified food.

5.2.6 Enhancing hybrid breeding

Hybrid breeding is an effective method for enhancing crop productivity. A male-sterile maternal line is essential for producing an improved-quality hybrid variety. Through the CRISPR/Cas9 technique, tremendous progress has been made in producing male-sterile lines, including photosensitive genic male-sterile rice (Li et al., 2016) and heat-sensitive male-sterile lines in rice (Zhou et al., 2016), wheat (Singh et al., 2018), and corn (Li et al., 2017a). Hybrid sterility is an obstacle to exploiting heterosis in breeding. Reproductive barriers were disrupted in hybrids between japonica and indica rice at SaF/SaM (sterility locus Sa) (Xie et al., 2017a) and in hybrids with African rice (Oryza glaberrima Steud) at OgTPR1 (sterility at the S1 locus) (Xie et al., 2017b). Knockout of the indica Sc gene in the Sc-I allele protected male fertility in japonica-indica hybrids (Shen et al., 2017), and knockout of the toxin gene ORF2 likewise improved the fertility of japonica-indica hybrids (Yu et al., 2018). Furthermore, in rice, genome editing was utilized to replace meiosis with mitosis through the knockout of three key meiotic genes, PAIR1, OSD1, and REC8 (Mieulet et al., 2016).
Moreover, activation of BBM1 in egg cells or knockout of MTL, by two independent research groups, resulted in asexual propagation lines that fix hybrid heterozygosity through seed propagation (Khanday et al., 2018; Wang et al., 2019a). Gene editing is also a constructive method for enhancing haploid breeding (Yao et al., 2018), shortening growth periods (Li et al., 2017b), improving resistance to silique shattering (Braatz et al., 2017), and countering the self-incompatibility of diploid potatoes (Ye et al., 2018), meeting the requirements of breeders. Enhanced breeding of hybrid plants results in the development of novel plant varieties that support SDG-15, enhancing life on land through diverse plant species. The successful application of genome editing technologies has therefore modified and improved many essential traits in diverse crops toward the achievement of different SDGs (Table 2).

Table 2 Overview of recent CRISPR/Cas9 applications for the SDGs.

5.3 Regulatory concerns of crop genome editing

Recent developments in biotechnology in the form of genome editing have made it viable for food products to reach the market more quickly and at a feasible cost. The latest genome editing tools are essential for the future production of crops, owing to their robustness, precision, and faster regulatory timelines in comparison with conventional GM crops. Several products developed through the CRISPR/Cas9 system are now not considered GMOs in several countries. The US Department of Agriculture (USDA) has stated that crops edited via the CRISPR/Cas9 platform can be grown and marketed without the regulatory processes and risk assessments that are mandatory under GMO biosafety regulations (Waltz, 2016b).
Such a step saves millions of dollars otherwise spent investigating GM crops through field tests and data collection, reduces the time required to introduce improved crop varieties to the market, and removes the public uncertainty associated with the consumption of GM crops. To date, five crops developed through the CRISPR/Cas9 system have been accepted by the USDA without the regulatory measures applied to GMOs. These include a browning-resistant mushroom, created by CRISPR/Cas9 knockout of a polyphenol oxidase (PPO) gene (Waltz, 2016b), and a waxy corn with enhanced amylopectin, developed by inactivating the endogenous waxy gene (Wx1) (Waltz, 2016a). Likewise, Setaria viridis with a delayed flowering period, attained through deactivation of the S. viridis homolog of the corn ID1 gene (Jaganathan et al., 2018), camelina altered for improved oil content (Waltz, 2018), and soybean with Drb2a and Drb2b modified for drought tolerance were not subjected to GMO regulatory measures (Cai et al., 2015; Kumar et al., 2020).

6 Perspectives on the criticisms of GM crops incorporation into sustainable food production systems

Agriculture plays an important role in achieving the SDGs, such as reducing hunger and malnutrition, alleviating poverty, implementing sustainable production and consumption systems, countering climatic change, ensuring gender equality, improving energy use, and maintaining healthy ecosystem services (Viana et al., 2022). It acts as a basis for economic development in several countries. Global agriculture has successfully provided sufficient food to meet the rising demand and varied consumption patterns of humans over recent decades (da Costa et al., 2022).
This has been possible largely through agricultural intensification, at the expense of environmental resource degradation, biodiversity loss, harmful gas emissions, and land clearing (Liu et al., 2022). However, it has been shown that advances in biotechnological tools for the genetic modification of crops will allow agricultural practices to achieve the SDGs in a sustainable manner. Nonetheless, GM crops face the moral and ethical dilemma of their incorporation into sustainable agricultural practices, which can be negotiated through an appropriate balance of the benefits and negative impacts of GM crops encompassing all three relational aspects of sustainability: the environment, society, and the economy (Figure 8).

Figure 8 Production of GM crops operationalizes the three themes of sustainability: environment (efficient use of resources and preservation of biodiversity), society (freedom of choice and livelihood), and economy (national income improvement and financial risk reduction).

6.1 GM crops for sustainable environment

GM crops are scrutinized for their environmental safety. The European Union (EU) invested more than 300 million EUR in 130 research projects covering a research period of more than 25 years, specifically to reach the conclusion that GM crops are not riskier than conventionally bred plants (European Commission, 2010). In fact, GM crops developed for input traits such as insect resistance and herbicide tolerance have reduced agriculture's environmental footprint by enhancing sustainable farming practices (Brookes and Barfoot, 2015). Moreover, the genetic modification of crops is a logical continuation of the selective plant breeding that humans have practiced for thousands of years. It supports the conservation of the environment and plant biodiversity, allowing their incorporation into sustainable food production systems.
Klümper and Qaim undertook a meta-analysis of primary data obtained from farm surveys and field trials in various parts of the world (Klümper and Qaim, 2014). It indicated that the insect resistance of GM crops has lowered pesticide application by 36.9%. In addition, GM seeds contribute to the adoption of conservation tillage, in which seeds are sown directly into the field without prior ploughing. This practice conserves essential soil microorganisms, preserves soil moisture, and maintains carbon in the soil. A meta-analysis by Abdalla et al. (2016) compared whole-season CO2 emissions from tilled and untilled soils and found that, on average, 21% more carbon was emitted from tilled soils than from untilled soils. Furthermore, the use of powered agricultural machinery decreased owing to reduced pesticide use and less field ploughing, providing indirect benefits to sustainable agriculture by conserving fossil fuels and decreasing CO2 emissions into the atmosphere. In the United States, the land area for soybean production increased by approximately 5 million hectares between 1996 and 2009, and 65% of those fields were under no-tillage practices due to the adoption of GM soybeans (Brookes and Barfoot, 2016). This resulted in a decline in fuel utilization of 11.8%, from 28.7 to 25.3 liters per hectare, and an approximate reduction in greenhouse gas emissions of more than 2 Gt between 1996 and 2009. Genetically modified soybean fields showed similar reductions in greenhouse gas emissions in countries such as Uruguay, Argentina, and Paraguay (Brookes and Barfoot, 2016). The cultivation of GM crops has also increased the biodiversity of non-target beneficial insects due to the reduced use of chemicals in the fields to control harmful insects (Karalis et al., 2020; Talakayala et al., 2020).
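The fuel-use figures reported by Brookes and Barfoot can be sanity-checked with a one-line percentage calculation; the snippet below is only a quick arithmetic verification of the numbers quoted in the text.

```python
# Verify the reported fuel-use decline from GM soybean adoption in the US:
# a drop from 28.7 to 25.3 liters per hectare should match the quoted 11.8%.
before_l_per_ha = 28.7
after_l_per_ha = 25.3
reduction_pct = (before_l_per_ha - after_l_per_ha) / before_l_per_ha * 100
print(f"{reduction_pct:.1f}% reduction")  # prints "11.8% reduction"
```

The computed value, about 11.85%, rounds to the 11.8% stated in the source, so the two figures are internally consistent.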
The pest-resistant traits of GM crops allow the restoration of crop species whose cultivation was discontinued due to harmful insect pressure. In addition, genetic modification has improved crops' adaptation to various environmental conditions, allowing for diversified production practices (Anderson et al., 2019). Despite the argument that GM crops threaten biodiversity, it has been found that many agricultural practices affect biodiversity, and GM crops do not broaden this threat. Agriculture causes significant clearance of natural habitat for food production (Mrówczyńska-Kamińska et al., 2021). However, it has been shown that the high yields of GM crops were achieved on smaller land areas (Burney et al., 2010). The improved productivity also reduces the pressure to convert additional land for agriculture (Bouët and Gruère, 2011). Genetic modification thereby reduces habitat destruction, a common consequence of intensive farming that poses a large threat to biodiversity. For instance, without the use of GM crops, an additional 22.4 Mha would have been needed to maintain global production at 2016 levels (Brookes and Barfoot, 2018). GM crops are sometimes considered unique species that pose a threat through the movement of their genes (Raman, 2017). At present, there is no scientific evidence of hazards associated with the transfer of genes between unrelated organisms developed through genetic alteration. Scientific organizations such as the U.S. National Academy of Sciences, the World Health Organization (WHO), and the British Royal Society have stated that consuming GM foods is no more harmful than consuming the same foods modified using conventional crop improvement techniques. Therefore, GM crops should not be excluded from sustainable food production systems. 6.2 GM crops for a sustainable society The adoption of GM crops has significant health benefits. It reduces exposure to the harmful chemical pesticides that are used with non-GM crops (Smyth, 2020b). 
A two-decade analysis of GM corn consumption by Pellegrino et al. (2018) indicated that it posed no threat to human or livestock health. It showed a substantive positive impact on health due to lower mycotoxin levels in the crops (Pellegrino et al., 2018). The emergence of new genetic modification technologies has enabled the production of crop varieties with enhanced flavors and reduced allergens (Mathur et al., 2015). Moreover, the prospective production of edible vaccines in GM crops could lower vaccine production costs and make vaccines accessible to a larger section of society. Pre-testing of the safety of GM crops in several areas has indicated no evidence of adverse reactions (Kamle et al., 2017). Although negative health consequences of GM crop consumption have been reported in rats, analyses of most studies on the safety of GM crops indicated no human health consequences (Szymczyk et al., 2018; Giraldo et al., 2019). Sustainable food production systems need to ensure food security for the growing population. Since most countries depend on food imports for their supplies due to climatic constraints and insect pests, food security appears difficult to achieve (Xiao et al., 2020). However, GM crops' tolerance to climatic stress and their higher yields will ease the process of achieving food security (Evanega et al., 2022; Keiper and Atanassova, 2022). Therefore, including GM crops in sustainable food production systems will enable different communities to produce their own food. Moreover, GM crops are being developed with improved shelf life so that they can be stored for longer periods without wastage. Such practices appeal to the ethical principles of beneficence and justice, which call for a fair and equitable food supply that benefits the larger society (Smyth, 2020a; Matouskova and Vanderberg, 2022; Vega Rodríguez et al., 2022). 
The genetic modification of crops also provides the different nutrients required for healthy human living. Kettenburg et al. (2018) presented evidence of health gains from Bt maize and from Golden Rice, which produces provitamin A for human beings. It has been reported that around 1 million children die annually due to Vitamin A deficiency (Swamy et al., 2019). Therefore, the production of Golden Rice plays an important role in preventing these childhood deaths. Hence, the introduction of GM crops can save human lives. The unproven potential risks of GM crops remain insignificant for people who are starving or suffering severe nutrient deficiencies (Vega Rodríguez et al., 2022). People with life-threatening diseases submit themselves to experimental drugs, which is considered ethical after informed consent; the same reasoning could be applied to GM crops. Experts from governmental and non-governmental agencies in some developing countries have increasingly included GM crops in their wider approaches to sustainability (Hartline-Grafton and Hassink, 2021). However, certain people within different communities still resist GM crops because of personal and religious beliefs (Bawa and Anilakumar, 2013). These include concern over the right to "play God", as well as the introduction into crops of genes from organisms that are avoided for religious reasons (Omobowale et al., 2009). Some believe that it is intrinsically wrong to tamper with nature, and others consider inserting new genes into a plant genome unethical (Daunert et al., 2008). However, such issues can be addressed through genome editing techniques and with the contrasting view that genetic modification is simply one more step in the processes by which humans modify the physical world. It is similar to the manufacturing of novel chemicals in industries and to the conventional breeding of plants and animals (Yang et al., 2022). 
Just as people have a choice to use different novel chemicals, a right to choose can similarly be established for the consumption of GM crops. Moreover, advances in science and technology have equipped humans with adequate measures to evaluate and monitor scientific innovations and prevent potential risks to society (John and Babu, 2021). Therefore, the use of GM crops in sustainable food production systems is supportable, as the development of GM crops is comparable to any other scientific innovation. 6.3 GM crops for a sustainable economy The economic aspect of GM crops faces the issue of intellectual property rights (Delgado-Zegarra et al., 2022). The producers of GM crops have used terminator technology to protect their seeds and reduce gene flow. The seeds and pollen of these crops are made sterile (Turnbull et al., 2021). After the harvest is complete, farmers have to re-purchase seeds from the seed producers. It has been argued that such a technique gives seed companies more control over what farmers grow, and it is considered unethical by parts of society (Delgado-Zegarra et al., 2022). From an innovation perspective, however, it is ethical to protect intellectual property: because these seeds are the product of biotechnology companies' investment, they need the same intellectual property protections as any other product, such as new software developed by an IT company (Muehlfeld and Wang, 2022). It is largely due to negative publicity that GM crops are held back by the public. There are also very few farmers who depend on second-generation seeds; hence, the introduction of sterile seeds does not affect most farmers' seed choices (Addae-Frimpomaah et al., 2022). Many GM seed manufacturers have developed a solution to sterile seeds through the creation of seed contracts with farmers. 
The seed contract is an agreement stating that the GM seeds are sterile, that farmers use them by their own choice, and that the seeds should not be distributed for any other purpose. This has delivered economic benefits to the seed producers in an ethical way through the farmer contract agreement. Farmers have also benefited economically from the adoption of GM crops (Raman, 2017). With the introduction of GM crops, there is a major increase in farmers' production efficiency, which in turn results in higher revenue (Oliver, 2014). Since GM crops are made resistant to pests, spending on chemical pesticides declines, as fewer chemicals are required for GM crops (Buiatti et al., 2013). Furthermore, the use of farm machinery declines as well, due to no-tillage practices with GM crops, which reduces fuel costs. In addition, land costs for growers can decline with GM crops, as these crops produce high yields in small spaces. Moreover, poor farmers are mostly engaged in subsistence farming, but the adoption of GM crops would enable such farmers to market their products due to the surplus yields of GM crops (Azadi et al., 2016), which would improve their quality of life within society (Lucht, 2015). Farmers have integrated GM crops well into their food production systems. Since the mid-1990s, GM crops have been planted by 18 million farmers (ISAAA, 2017). The track record indicates logistical and economic advantages to farmers. A net economic benefit of USD 186.1 billion over twenty-one years of GM crop use was recorded across various farms. Brookes and Barfoot (2018) found that 52% of these benefits were reaped by farmers in developing countries. The majority of these gains (65%) were mainly due to yield and productivity increases, while the remainder (35%) resulted from cost savings. 
7 Conclusion The practice of sustainable agriculture has become challenging due to climate change, the rising population, and the shrinkage of arable land. There is a need to develop modified crops with higher productivity, quality, and tolerance to various biotic and abiotic stresses. The genetic modification of crops has enabled the development of efficient production systems that provide substantial benefits to producers and the community, based on the three principles of sustainable agriculture: protecting the environment, enhancing human health, and improving the economy. Even when these crops pass strict assessments of environmental and health safety and are granted regulatory approval, concerns are still raised over the genetic modification tools involved and their unknown long-term disadvantages for the environment and health. The potential negative consequences of GM crops have led to their limited implementation in various countries. To overcome and address some of these concerns, new advanced alternative molecular techniques have been developed, such as genome editing, particularly the CRISPR/Cas9 system, which improves crop traits without introducing foreign genes. The expansion of plant breeding to genetic modification through genome editing would further increase production per unit of land, making these crops essential for achieving the SDGs, especially eradicating hunger and improving food security and human health. The present review indicates that it would be imprudent to dismiss GM crops as a tool for meeting the goals of sustainable development. With increasing global challenges, GM crops can help humanity. However, it is imperative that the scientific community and agricultural industries invest in better communication and regulation to counter the misinformation and unethical research associated with GM crops. 
Moreover, this review suggests that GM crops can be broadly adopted by improving existing regulations, efficient monitoring, and practice implementation through government agriculture bodies. In addition, developing a global risk mitigation strategy and communicating with growers will ensure substantial acceptance and adoption of GM crops in several countries, bringing global profitability and productivity. Finally, the sustainability of GM crops should be judged by their role in sustainable agriculture and human development over the next 30 years. It is not only GM crops that pose certain risks and concerns; all methods of food production are associated with some drawbacks. However, the use of genome editing tools and the regulation of GM crops ensure that these crops are as safe as conventionally bred crops and can act as drivers of sustainable food security. Author contributions KM and MA conceptualized and wrote the original manuscript. KM, MA, FB, and HR reviewed and edited the draft paper. All the authors have read and approved the submitted version of the manuscript. Funding This research work was supported by funding from the United Arab Emirates University, the Research Office, to KM under grant number 31R203. Conflict of interest The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Publisher's note All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. Buiatti, M., Christou, P., Pastore, G. (2013). 
The application of GMOs in agriculture and in food production for a better nutrition: Two different scientific points of view. Genes Nutr. 8 (3), 255–270. doi: 10.1007/s12263-012-0316-4
yes
Data Privacy
Can Internet Service Providers sell user data without consent?
yes_statement
"internet" service providers can "sell" "user" "data" without "consent".. "user" "data" can be sold by "internet" service providers without "consent".
https://www.consumerreports.org/consumerist/house-votes-to-allow-internet-service-providers-to-sell-share-your-personal-information/
House Votes To Allow Internet Service Providers To Sell, Share ...
House Votes To Allow Internet Service Providers To Sell, Share Your Personal Information by Chris Morran Last updated: March 28, 2017 The new Federal Communications Commission's rules intended to limit how companies like AT&T, Comcast, Verizon, and Charter can use internet customers' sensitive personal information are effectively dead in the water, thanks to a House of Representatives vote today to kill the regulations, ensuring internet service providers can use and sell user data. The final vote was 215 to repeal the privacy rules, with 205 votes to keep them in place. Voting was mostly along party lines, though 15 Republicans broke rank to vote against the resolution. No Democrats voted in its favor. The GOP lawmakers who voted against the resolution were Justin Amash (MI), Mo Brooks (AL), Mike Coffman (CO), Warren Davidson (OH), John Duncan (TN), John Faso (NY), Garret Graves (LA), Jaime Herrera Beutler (WA), Walter Jones (NC), Tom McClintock (CA), David Reichert (WA), Mark Sanford (SC), Elise Stefanik (NY), Kevin Yoder (KS), Lee Zeldin (NY). The Senate has already approved this resolution, meaning it only awaits the signature of President Trump to undo the FCC regulations. The rules, finalized in October by the FCC, effectively divide the data that your ISP has about you and your browsing habits into two categories. The first category is sensitive data. ISPs would have been prevented from using the following information without your permission: • Geographic location • Children's information • Health information • Financial information • Social Security numbers • Web browsing history • App usage history • The content of communications The second category includes less-sensitive, but still personal, data. 
ISPs would have been allowed to use this information, but would have been required to allow users the opportunity to opt out of having the following shared: • Your name • Your address • Your IP address • Your current subscription level • Anything else not in the “opt in” bucket. The rules were immediately opposed by ISPs and their lobbyists, who said the regulations were unfair because they did not place the same restriction on content companies Google and Netflix — while glossing over the fact that the FCC has no authority to regulate what Google and Netflix do with their user information. Republican lawmakers are using the Congressional Review Act to roll back this regulation. The CRA allows lawmakers to issue resolutions of disapproval on new, major regulations. For a CRA resolution to be enacted, it must be passed by a majority in both the House and Senate, then signed by the President. Until the Trump administration, this law had only been used successfully once in its 20-year history. Congress has already passed more than ten CRA resolutions on to the White House in just the last couple of months. President Trump is expected to sign the resolution killing the internet privacy rule. Despite the overwhelming GOP vote in favor of this resolution, Rep. Michael Burgess (TX) was one of the few Republicans on hand during the early afternoon session to argue for rolling back the FCC privacy rules. Burgess referred to these regulations as “duplicative” and twice read directly from the website of the Federal Trade Commission, noting that the FTC has long been the consumer privacy enforcer for the federal government — while failing to recognize that the recent reclassification of broadband as a “common carrier” piece of vital telecommunications puts it fully under the FCC’s regulatory umbrella, meaning the FTC can’t enforce privacy regulations on broadband providers. 
In fact, the FTC Act has an explicit exception for common carriers, meaning it has no legal authority to regulate broadband providers. We know this because AT&T successfully used this exception to wriggle out of an FTC lawsuit in 2016. On the Democratic side of the early afternoon's debate, no one was more vehemently against the CRA resolution than Rep. Michael Capuano of Massachusetts. "What the heck are you thinking?" Capuano hollered to a mostly empty GOP side of the hall. "Give me one good reason why Comcast should know what my mother's medical problems are?… Just last week I bought underwear on the internet. Why should you know what size I take? Or the color? Or any of that information?" Capuano challenged Burgess to go out and find three people on the street who actually agreed with the assertion that Comcast, Verizon, et al., actually need this data. He said that Rep. Burgess was right to be concerned about the un-level playing field between the ISP privacy rules and those governing Google and Netflix, but countered that "You don't level the playing field by lowering it." Rep. Jared Polis (CO) pointed out that using the CRA to roll back the privacy rules was a nuclear option that could prevent the FCC from ever crafting meaningful privacy regulations. If the FTC decides to bolster consumer privacy protections for online content companies, the FCC might not be able to follow suit, as the CRA prevents regulatory agencies from drafting new rules that are overly similar to CRA-repealed regulations. "I don't want anyone to take my information and make money off of it just because they can get their mitts on it," added Rep. Anna Eshoo (CA). "Who do you go to complain to? No one, because there is nothing left to enforce." Rep. 
Marsha Blackburn (TN) — who has previously tried to kill the FCC's net neutrality rules, and who just happened to have received nearly $80,000 from AT&T, Comcast, Verizon, and telecom lobbyists — argued that the FCC can still handle privacy issues on a "case by case" basis, and that the free market will prevent ISPs from going too far in exploiting customer data. Rep. Bob Latta (OH), a supporter of the repeal resolution (and a recipient of more than $60,000 in campaign contributions from companies affected by the rules), recommended legislation that would clarify that the FTC has authority to regulate ISPs' privacy matters. In response, Rep. Mike Doyle (PA) suggested that maybe the GOP should have gone that route rather than using the CRA to prevent the FCC from ever issuing meaningful privacy guidelines. Regarding the argument that the FTC is doing a good job of regulating consumer privacy issues, Capuano pointed to the CloudPets doll and other toys that allegedly collected user data, and Vizio TVs that watched viewers back. Why would consumers want to put ISPs under such lax privacy controls, asked the Congressman. FCC Chairman Ajit Pai, who vehemently opposed the privacy rules when they were approved, said today that the FCC will "work with the FTC to ensure that consumers' online privacy is protected through a consistent and comprehensive framework. In my view, the best way to achieve that result would be to return jurisdiction over broadband providers' privacy practices to the FTC, with its decades of experience and expertise in this area." Former FCC Commissioner Michael Copps has a different take. "Big Cable and Big Telecom have struck again," said Copps, now an adviser to Common Cause. "By doing the industry's bidding, the congressional majority is wiping away common sense protections for the privacy of internet users' personal data and browsing history. 
If this bill is signed by the president, broadband providers will have free rein to sell user data to the highest bidder – without ever informing consumers." Jonathan Schwantes, senior policy counsel for our colleagues at Consumers Union, criticized lawmakers for rushing to roll back these regulations. "In a matter of four legislative days, Congress has wiped out groundbreaking privacy rules, carefully designed over 200 days, intended to empower consumers and protect their privacy," said Schwantes. "The only winners today are internet service providers, mega-corporations like AT&T and Comcast, who have been strong-arming Congress since the day these rules passed last October." 5 Things ISPs Have Done, And Could Do, Without These Rules The Electronic Frontier Foundation notes several ways that ISPs have abused consumer data, and would be free to continue abusing in the absence of the FCC rules: 1. Selling your data to marketers: Some ISPs are already doing this, but the now-endangered rules would have prevented them from sharing most user information without consent. 2. Hijacking your searches: Your ISP could intercept your search query and direct you straight to sites that have paid for traffic to certain search terms. A number of ISPs tried doing this in 2011. 3. Inserting ads: ISPs can monitor your browsing habits — and have — and inject ads on top of the ones you're already seeing from the websites you visit. 4. Using software on your phone to record every URL you visit: AT&T, Sprint, and T-Mobile have all previously used pre-installed software to track every site and app used by their wireless users. 5. Injecting undetectable, undeletable tracking cookies: Verizon was caught using "supercookies" on all of its mobile customers' phones, tracking every piece of web data from the phone. Verizon stopped the practice, but EFF argues it could return. Editor's Note: This article originally appeared on Consumerist. 
yes
Data Privacy
Can Internet Service Providers sell user data without consent?
yes_statement
"internet" service providers can "sell" "user" "data" without "consent".. "user" "data" can be sold by "internet" service providers without "consent".
https://www.cbc.ca/news/science/us-fcc-internet-privacy-legislation-marketing-ads-canada-1.4046512
U.S. internet service providers get green light to sell user data — but ...
Canada's privacy commissioner and the CRTC have made decisions in recent years that effectively limit the information internet service providers can collect and use for secondary purposes, such as marketing, without your consent. (Issei Kato/Reuters) Privacy protections designed to prevent U.S. internet service providers from sharing or selling subscribers' personal information with third parties — without permission — were dismantled by U.S. Congress on Tuesday. It means that information about the apps American internet subscribers use, the websites they visit, and the things they purchase online — among other things — can potentially be tracked, shared, and monetized by third parties, unless those users opt out. You might be pleased to learn that Canada, which often follows the U.S. lead on technology issues, has taken a different approach. Here, internet service providers can only share your personal information with third parties with your express consent. Tamir Israel, a staff lawyer at the Canadian Internet Policy and Public Interest Clinic, says you have the privacy commissioner of Canada and the CRTC to thank. Both organizations have released decisions in recent years that effectively limit the information internet service providers can collect and use for secondary purposes, such as marketing, without your consent. Pitfalls of relevant ads In 2013, the privacy commissioner launched an investigation into a new Bell initiative called the "relevant advertising program." The Canadian telco used network usage information, as well as account and demographic information, to build advertising profiles that could be used by third parties to target specific audiences with ads. In other words, advertisers could target Bell users that visited certain websites. Browsing history or frequently used apps could also be used to infer users' interests. Users could be further targeted by age, phone model or credit score. 
Bell also indicated that it might use home internet usage, television viewing history and calling patterns to build ad profiles in the future. This sort of thing is fine — but only if customers opt in, or choose to allow their personal information to be used in this way. In this case, however, Bell designed the relevant advertising program to be opt-out, the default for Bell users unless they said otherwise. This is the current reality for internet users in the U.S. Marketers and advertisers are especially interested in the data they can glean from U.S. internet service provider usage data, which can reveal much about a person's habits and interests. (Mike Segar/Reuters) "Bell should not simply assume that, unless they proactively speak up to the contrary, customers are consenting to have their personal information used in this new way," Privacy Commissioner Daniel Therrien said at the time, recommending that Bell make its program opt-in. By combining a user's personal information with their usage information, "they kind of crossed a line in what they proposed they wanted to do," said David Fraser, a partner at the law firm McInnes Cooper, who specializes in privacy issues. "If any other telco was looking at doing that before, they've mostly changed their mind." Although the review was not specifically focused on marketing or ads, the CRTC said in its decision it was taking steps "to ensure that personal information collected for the purpose of managing internet traffic is not used for other purposes and is not disclosed." Bell ultimately chose to close its old marketing program, but it now has a new program — one that, following the privacy commissioner's recommendation, is opt-in. So there's no data sharing at all? Even though Canadian ISPs can't share personal information with third parties without your consent, it doesn't mean they're not sharing any data at all. 
Rogers, Bell and Telus, for example, say they may share de-identified information — data that has been stripped of personal information — with third parties, without your consent. This may be done for "research, planning, or product and service development," according to Telus, while Bell says it may be done "to provide social benefits (such as assisting municipalities with traffic planning) and to develop analytic marketing reports for our use and for the use of our partners." What's the problem with that? Researchers have shown that, in some cases, users can be re-identified when de-identified data is combined with other sources of data. It's enough of a concern that some companies explicitly forbid re-identification in their terms of use. But by and large, Fraser sees the collection of de-identified data as much less of a concern than other types of data. "It's aggregate information," he said. On its own, "it really doesn't tell you anything about any individual." Of course, knowing things about individuals is exactly what marketers want from ISPs. In Canada, they'll have to keep waiting. In the U.S., not so much. ABOUT THE AUTHOR Matthew Braga is the senior technology reporter for CBC News, where he covers stories about how data is collected, used, and shared. You can contact him via email at matthew.braga@cbc.ca. For particularly sensitive messages or documents, consider using Secure Drop, an anonymous, confidential system for sharing encrypted information with CBC News.
Stance: no
Category: Data Privacy
Query: Can Internet Service Providers sell user data without consent?
Search type: yes_statement
Source: https://www.longislandpress.com/2017/04/03/internet-search-history-bill-sale-trump-schumer/
Title: Critics Blast Bill Authorizing Sale of Users' Internet Search Histories ...
Joining a chorus of criticism from privacy advocates and ethics experts, Senate Minority Leader Chuck Schumer called on President Donald Trump Sunday to veto a controversial bill passed by Congress allowing Internet Service Providers (ISPs) to sell customer data to advertisers without their consent.

“Signing this rollback into law would mean private data from our laptops, iPads and even our cellphones would be fair game for internet companies to sell and make a fast buck,” said Schumer. “An overwhelming majority of Americans believe that their private information should be just that—private—and not for sale without their knowledge. That’s why I’m publicly urging President Trump to veto this resolution.”

The bill passed in the House of Representatives last week 215 to 205, despite Democrats opposing the measure and 15 Republicans voting “no.” It narrowly passed the U.S. Senate a week earlier. Republicans argued that the bill would put ISPs on fairer ground with Internet giants like Facebook and Netflix, two companies that already gather vast amounts of personal data. The difference between those social media networks and entertainment companies and ISPs, however, is that internet providers can see everything a user does online, while Netflix is monitoring behavior within its own ecosystem.

The measure effectively prevents the Federal Communications Commission (FCC) from adopting rules it previously put in place last October restricting ISPs from profiting off their customers’ search history. If President Trump signs the bill into law, as expected, internet providers such as Optimum or Verizon FIOS on Long Island would be able to sell customer data—search history, what online stores you visit, etc.—to marketing companies.

“What’s going to happen is you’re going to see more and more targeted ads when you surf online,” said Mark Grabowski, internet law and ethics professor at Adelphi University.
“So, for example, if your kid’s teacher emails you that he’s struggling in Algebra, you might see ads about tutoring services. If you do a Google search for flights to Paris, expect to see ads from airlines and hotel websites. You get the idea.”

“There’s lots of misinformation about this—although it’s still bad news,” Grabowski added. “In short, nothing is changing. You didn’t have online privacy to begin with, so you’re not losing anything.”

When asked specific questions regarding whether it currently shares consumer data with advertisers and the bill’s ramifications on the company’s current privacy policy, Altice USA, which owns Cablevision, passed along a statement from The Internet & Television Association cheering the bill’s passage. Steps taken by Congress to “repeal the FCC’s misguided rules marks an important step toward restoring consumer privacy protections that apply consistently to all internet companies,” the statement read. “With a proven record of safeguarding consumer privacy, internet providers will continue to work on innovative new products that follow ‘privacy-by-design’ principles and honor the FTC [Federal Trade Commission]’s successful consumer protection framework. We look forward to working with policymakers to restore consistency and balance to online privacy protections.” Verizon did not respond to a request for comment.

Privacy advocates vehemently objected to the proposed law. “Should President Donald Trump sign S.J. Res. 34 into law, big internet providers will be given new powers to harvest your personal information in extraordinarily creepy ways,” Electronic Frontier Foundation, a privacy advocacy group, said. “They will watch your every action online and create highly personalized and sensitive profiles for the highest bidder. All without your consent.
This breaks with the decades-long legal tradition that your communications provider is never allowed to monetize your personal information without asking for your permission first.”

Anyone with possession of an internet user’s search history can glean vast amounts of insight into that person: potential health problems, political leaning, sexual orientation, purchase habits and more. Marketers can then place advertisements on webpages based on a user’s search history.

“Reversing those protections is a dream for cable and telephone companies, which want to capitalize on the value of such personal information,” Tom Wheeler, former FCC chairman under the Obama administration, wrote in The New York Times. “I understand that network executives want to produce the highest return for shareholders by selling consumers’ information. The problem is they are selling something that doesn’t belong to them.

“Here’s one perverse result of this action,” he continued. “When you make a voice call on your smartphone, the information is protected: Your phone company can’t sell the fact that you are calling car dealerships to others who want to sell you a car. But if the same device and the same network are used to contact car dealers through the internet, that information—the same information, in fact—can be captured and sold by the network. To add insult to injury, you pay the network a monthly fee for the privilege of having your information sold to the highest bidder.”

Consumers have become savvier in recent years about protecting their personal information by turning to encrypted messaging services and Virtual Private Networks (VPNs), the latter of which can disguise where a person is using the internet. One company offering such protections, NordVPN, said it saw an 86-percent surge in inquiries in the first few days after Congress passed the law. Experts in internet privacy acknowledged that consumers had little privacy even prior to Congress passing the bill.
The FCC rule the bill repealed had not even gone into effect, and companies have long been gathering consumer data, but previously required permission before putting it up for sale.

“Deregulation of internet service providers has been a disaster for Americans,” Adelphi’s Grabowski said. “ISPs haven’t delivered the promises they made when they begged Congress to end common carriage regulations in the ’90s. Twenty years later, we’ve gone from being a pioneer in internet service to now lagging behind developing countries in terms of access, cost, speed, privacy protections and more. And this situation will probably continue to get worse.”

About the author: Rashed Mian has been covering local news for the Long Island Press since 2011. He graduated from Hofstra University in 2010, where he studied print journalism. Rashed, the staff’s multimedia reporter, covers daily news for the web, shoots and edits feature videos and writes about civil liberties.
Stance: yes
Category: Data Privacy
Query: Can Internet Service Providers sell user data without consent?
Search type: yes_statement
Source: https://www.pbs.org/newshour/politics/lament-end-internet-privacy-read
Title: Before you lament the end of your internet privacy, read this | PBS ...
Before you lament the end of your internet privacy

Top Republicans pushed a measure through the House on Tuesday that overturns Obama-era regulations intended to protect consumers’ data from being shared with advertisers without consent. If you’re reading this story on a computer or internet-connected device, that obviously includes you.

The bill, which passed 215-205 in the House, pulls back privacy rules adopted by the Federal Communications Commission in 2016. Those rules would have broadened FCC privacy protections so they also applied to broadband internet service providers. In other words, they required companies like AT&T, Verizon and others to get consent from customers like you before sharing (or selling) your personal data and web browsing history with advertisers. A companion bill passed 50-48 last week in the Senate. President Donald Trump signed the bill Monday.

Now, before you lament the end of your internet privacy — take a deep breath. As Wired reporter Klint Finley told the NewsHour, those FCC rules never actually went into effect, meaning technically, Tuesday’s measure doesn’t change anything. The rules to protect customer data were passed in October of last year but wouldn’t have taken effect until December 2017, Finley said. So the bill passed on Tuesday simply blocks those rules from taking effect, he said.

That said, Tuesday’s measure does create some wrinkles in the debate over consumer privacy in the rapidly growing Internet of Things. Namely, the measure blocks the FCC not only from implementing the 2016 rules, but from pursuing others like them. “Mostly it means that internet service providers now have the go-ahead to sell data,” Finley explained. “It was already technically legal, but if any companies were holding off on doing it while they waited to see if the laws went into effect or not, they don’t have to wait anymore.”

Let’s dissect this together. How did we get here?
The FCC’s 2016 privacy rules were a follow-up to the 2015 Open Internet Order, regulations that defined the Internet as a public utility and set the stage for today’s net-neutrality rules. The Open Internet Order also put the FCC in charge of privacy regulations. Until 2015, the Federal Trade Commission, or FTC, held jurisdiction over ISPs, according to an FCC statement. However, two years ago, the FCC “stripped the FTC of its authority over internet service providers.” The debate over net neutrality can be summed up in this question, as posed by Neil Irwin of The New York Times: “Is access to the Internet more like access to electricity, or more like cable television service?”

Why is Congress taking this up again?

The Obama-era rules were passed in October 2016. But lawmakers took advantage of the Congressional Review Act, a rarely used procedure that permits lawmakers to overturn recently finalized regulations they disagree with. The bill has sparked debate from both sides of the aisle. Some House Republicans say having the FCC require consumer consent before data is shared approaches government overreach. House Democrats say not putting those protections in place sets up a poor precedent for online privacy.

ISPs have pushed back against privacy regulations. Their issue: Sites like Facebook and Google are regulated by the Federal Trade Commission (FTC) and are therefore governed by regulations that do not force them to obtain customer consent before collecting and selling personal data. Notice how right after you’ve been searching for a copy of Uncharted: The Nathan Drake Collection on Amazon, advertisers bombard you with video game ads once you’ve switched back to Facebook.

What does the latest bill actually do?

“Historically, regulations have treated that data as the property of the consumer,” GeekWire wrote. Under the new bill, “it will be viewed more like the property of internet providers.” This means ISPs could sell your personal information without consent.
They can view anything from your browsing history and geolocation to the applications you use on the web. The bill also makes it harder for the FCC to pursue policies like those passed last fall, says Ernesto Falcon, legislative counsel for the advocacy group Electronic Frontier Foundation. Congress has “in essence created a law that contradicts the existing privacy law the FCC is tasked with enforcing,” Falcon told the NewsHour.

Why do ISPs want this information, and what do they do with it?

As the digital economy expands, ISPs have become increasingly interested in improving their presence within advertising, Business Insider reported. “The traditional means is to collect information to create a profile and market it to advertisers who attempt to connect that user to goods they believe they will purchase,” Falcon said. In theory, anyone from insurance companies, airlines, banks and retailers to political parties or, critics fear, the government, could buy data profiles of consumers.

What’s next?

Trump signed the bill into law Monday. Even though the measure repeals internet privacy rules passed in October, the FCC still reviews privacy cases involving customer privacy on the Internet. FCC Chairman Ajit Pai told the NewsHour in a statement that “the FCC will work with the FTC to ensure that consumers’ online privacy is protected through a consistent and comprehensive framework. In my view, the best way to achieve that result would be to return jurisdiction over broadband providers’ privacy practices to the FTC, with its decades of experience and expertise in this area.”

Republican Sen. John Thune told Axios that he’s open to passing additional privacy protections in order to reach a legislative compromise on net neutrality “if that were something that it took to get Democrats to the table.”

How to protect your data

Let’s be honest: Whether you know it or not, your internet privacy has more than likely been jeopardized at some point. When it comes to dealing with ISPs, educating yourself on what to expect goes a long way, Neema Singh Guliani, legislative counsel for the American Civil Liberties Union, told the NewsHour. Consumers can still explicitly opt out from having their data shared, even if it isn’t obvious how to do it. “Consumers can call their providers and opt out of having their information shared,” Guliani said. “Consumers can pressure companies to be more transparent and I think there’s an opportunity to pressure companies to implement good practices and for consumers to say ‘I think that you should require opt-in consent and if you’re not, why not?’”

Now, some ISPs do offer some sort of opt-out from their targeted advertising. But as noted in The Verge, you may have to dig around in a company’s fine print to find protections for yourself. Falcon said utilizing a VPN, or Virtual Private Network, could provide a safeguard, but noted that it’s not a bulletproof method. “People can start using VPNs but they aren’t a perfect defense and ISPs are going to start using our browser information and application data without our permission,” Falcon said. “Ultimately people must let their member of Congress know they value their privacy. If they voted against repeal, encourage them to push for legislation to restore our privacy rights.”

Photo caption: The House voted Tuesday to undo Obama-era regulations that would have forced internet service providers like Comcast and Verizon to ask customers’ permission before they could use or sell much of their personal information.
Stance: yes
Category: Data Privacy
Query: Can Internet Service Providers sell user data without consent?
Search type: yes_statement
Source: https://www.techtarget.com/searchdatamanagement/definition/consumer-privacy
Title: What is Consumer Privacy? | Definition from TechTarget
What is consumer privacy (customer privacy)?

Consumer privacy, also known as customer privacy, involves the handling and protection of the sensitive personal information provided by customers in the course of everyday transactions. The internet has evolved into a medium of commerce, making consumer data privacy a growing concern. This form of information privacy surrounds the privacy and protection of a consumer's personal data when collected by businesses. Businesses implement standards for consumer privacy to conform to local laws and to increase consumer trust, as many consumers care about the privacy of their personal information.

Consumer privacy issues

Personal information, when misused or inadequately protected, can result in identity theft, financial fraud and other crimes that collectively cost people, businesses and governments millions of dollars each year. Common consumer privacy features offered by corporations and government agencies include the following:

The popularity of e-commerce and big data in the early 2000s cast consumer data privacy issues in a new light. While the World Wide Web Consortium's Platform for Privacy Preferences Project (P3P) emerged to provide an automated method for internet users to divulge personal information to websites, the widespread gathering of web activity data was largely unregulated. Additionally, P3P was only implemented on a small number of platforms. Since then, data has taken on a new value for corporations.
As a result, almost any interaction with a large corporation -- no matter how passive -- results in the collection of consumer data. This is partially because more data leads to improved online tracking, behavioral profiling and data-driven targeted marketing. The surplus of valuable data, combined with minimal regulation, increases the chance that sensitive information could be misused or mishandled.

For example, Meta collects a large amount of personal Facebook user data, including how much time users spend on the app, checked-in locations, posted content metadata, messenger contacts and items bought through Marketplace. Meta can then share user data with third-party apps, advertisers and other Meta companies. The collected data is used for targeted advertising. If not properly protected, data leaks can occur -- which happened to Meta in 2018 with the Facebook-Cambridge Analytica leak. Cambridge Analytica used Facebook user data to create voter profiles for political campaigns. The personal data of 87 million Facebook users was consequently leaked.

Laws that protect consumer privacy

Consumer privacy is derived from the idea of personal privacy, which, although not explicitly outlined in the U.S. Constitution, has been put forward as an essential right in several legal decisions. The Ninth Amendment is often used to justify a broad reading of the Bill of Rights to protect personal privacy in ways that aren't specifically outlined but implied. Despite this, there's currently no comprehensive legal standard for data privacy at the federal level in the U.S. There have been attempts at creating one, but none have been successful. For example, in 2017, the U.S. government reversed a federal effort to broaden data privacy protection by requiring internet service providers to obtain their customers' consent prior to using their personal data for advertising and marketing.
Another comprehensive federal consumer privacy bill, the Consumer Online Privacy Rights Act, was proposed in late 2019. The bill has yet to pass, and many speculate that getting it approved might be a struggle. Currently, the U.S. relies on a combination of state and federal laws enforced by various independent government agencies, such as the Federal Trade Commission (FTC). These can sometimes lead to incongruities and loopholes in U.S. privacy law since there's no central authority enforcing them.

By contrast, legislation has enforced high standards of data privacy protection in Europe. For example, the European Union (EU) passed the General Data Protection Regulation (GDPR) in 2018, unifying data privacy laws across the EU and updating existing laws to better encompass modern data collection and exchange practices. The law also had a significant effect on nations outside of Europe -- including the U.S. -- because multinational corporations that serve EU citizens were forced to rewrite their privacy policies to remain in compliance with the new regulation. Companies that didn't comply could incur huge financial penalties. The most notable example is Google, which was fined $57 million under the GDPR in 2019 for failing to adhere to transparency and consent rules in the setup process for Android phones.

The GDPR is touted by many as the first legislation of its kind and has influenced other nations and states within the U.S. to adopt similar regulations. The reason the GDPR is possible for the EU is largely because many European nations have central data privacy authorities to enforce it. While the U.S. doesn't have a unified data privacy framework, it does have a collection of laws that address data security and consumer privacy in various sectors of industry. Federal laws that are relevant to consumer privacy regulations and data privacy in the U.S. include the following:

- The Privacy Act of 1974 governs the collection and use of information about individuals in federal agencies' systems. It prohibits the disclosure of an individual's records without their written consent, unless the information is shared under one of 12 statutory exceptions.
- The Financial Modernization Act of 1999 governs how companies that provide financial products and services collect and distribute client information, as well as preventing companies from accessing sensitive information under false pretenses. When defining client confidentiality, this act makes distinctions between a customer and a consumer. A customer must always be notified of privacy practices, whereas a consumer must only be notified under certain conditions.
- The Family Educational Rights and Privacy Act (FERPA) of 1974 protects the privacy of student education records and applies to all schools that receive funding from the U.S. Department of Education.

Many of these federal laws, while providing reasonable privacy protections, are considered by many to be lacking in scope and out of date. However, at the state level, several important data privacy laws have recently been passed, with more pending approval. Because these laws were passed recently, they more adequately protect consumers in a way that applies to current data exchange practices.

The most notable of these state laws is the California Consumer Privacy Act (CCPA), which was signed in 2018 and took effect on Jan. 1, 2020. The law introduces a set of rights that previously hadn't been outlined in any U.S. law. Under the CCPA, consumers have several privileges that a business must honor upon verifiable consumer requests. The law entitles consumers to do the following:

- Know what personal data about them is being collected.
- Know if their personal data is being sold and to whom.
- Say no to the sale of personal information.
- Access their collected personal data.
- Delete data being kept about them.
- Not be penalized or charged for exercising their rights under the CCPA.
- Require a parent or guardian to provide affirmative consent -- opting in -- to the collection of personal data from a child under the age of 13; for children age 13-16, that consent can come from the child.

The law applies to corporations that either have a gross annual revenue of over $25 million per year or collect data on 100,000 or more California residents. Companies that don't comply face sizeable penalties and fines. The law also only applies to residents of California currently. However, it's expected to set a precedent for other states to take similar action. Several companies have also promised to honor the rights granted under the CCPA for consumers in all 50 states, so as not to have an entirely different privacy policy for Californians. Participating businesses include Microsoft, Netflix, Starbucks and UPS.

The following states are enacting or currently practicing similar laws:

- Vermont. In 2018, the state approved a law that requires data brokers to disclose consumer data collected and grants consumers the right to opt out.
- Nevada. In 2019, the state enacted a law allowing consumers to refuse the sale of their data.
- Maine. The state has enacted legislation that prohibits broadband internet service providers from using, disclosing, selling or allowing access to customer data without explicit consent.
- New York. The state passed a bill known as the New York Privacy Act on June 9, 2023, after its third reading in the New York State Senate. The act is modeled after -- and aims to surpass -- the CCPA.
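The consumer rights listed above map naturally onto a small set of operations a business must support: access, opt-out of sale, and deletion. The sketch below is a minimal, hypothetical in-memory model of that idea (the class, field names and storage are invented for illustration; it is not a compliance implementation).

```python
# Minimal sketch of honoring CCPA-style consumer requests.
# Hypothetical in-memory store -- not a compliance implementation.

class ConsumerStore:
    def __init__(self):
        self.records = {}          # consumer_id -> personal data
        self.do_not_sell = set()   # consumers who opted out of sale

    def access(self, consumer_id):
        """Right to know/access the data collected about a consumer."""
        return self.records.get(consumer_id, {})

    def opt_out_of_sale(self, consumer_id):
        """Right to say no to the sale of personal information."""
        self.do_not_sell.add(consumer_id)

    def may_sell(self, consumer_id):
        """A sale pipeline must check this flag before sharing data."""
        return consumer_id not in self.do_not_sell

    def delete(self, consumer_id):
        """Right to deletion of data being kept about the consumer."""
        self.records.pop(consumer_id, None)

store = ConsumerStore()
store.records["c1"] = {"email": "c1@example.com"}
store.opt_out_of_sale("c1")   # the consumer says no to sale
store.delete("c1")            # and later requests deletion
```

A real system would also need identity verification of the "verifiable consumer request," audit logging, and propagation of opt-outs to downstream partners; the point here is only that each statutory right corresponds to a concrete operation.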
Critics of these laws worry they will still fall short and create loopholes that could be exploited by data brokers. Also, increased compliance regulations force corporations to adapt, which creates more work and potential bottlenecks, and could hinder the development of valuable technology and services. A multitude of unique state laws can also create conflicting compliance requirements and end up creating new problems for consumers and corporations alike. However, privacy advocates view this somewhat concurrent state-level effort as a step toward comprehensive federal legislation in the future.

Agencies that regulate data privacy

The following agencies regulate data privacy in the U.S.:

- The FTC requires companies to disclose their corporate privacy policies to customers. The FTC can take legal action against companies that violate customer privacy policies or compromise their customers' sensitive personal information. It also provides resources for those who want to learn more about privacy policies and best practices, as well as information for victims of privacy-related crimes, such as identity theft. The FTC is currently the most involved agency in regulating and defending data privacy in the U.S.
- The Consumer Financial Protection Bureau protects consumers in the financial sector. It has outlined principles that protect consumers when authorizing a third party to access their financial data and regulates the provision of financial services and products using these principles.
- The U.S. Department of Education administers and enforces FERPA and aids schools and school districts with best practices for handling student information. Students, especially those paying for secondary education, are consumers of an educational service.
- The Securities and Exchange Commission enforces rules surrounding the disclosure of data breaches and general data protection.

Why consumer privacy protection is necessary

A series of high-profile data breaches in which corporations failed to protect consumer data from internet hacking has drawn attention to shortcomings in personal data protection. Several such events were followed by government fines and forced resignations of corporate officers.
In 2017, the litany of customer data breaches included Uber, Yahoo and Equifax, each providing unauthorized access to hundreds of thousands -- if not millions -- of customer records. These high-profile data breaches have drawn attention to shortcomings in data protection. Consumer privacy issues have arisen as prominent web companies like Google and Meta moved to the top of business ranks using web browser data to gain revenue. Other companies, including data brokers, cable providers and cellphone manufacturers, have also sought to profit from related data products. The privacy measures offered to users by these companies are also insufficient, as there's a limit to how much protection a social media user can get by self-regulating their content using an app's privacy settings. This lack of privacy has also affected user trust. A 2022 study from Insider Intelligence found that, on average, 35% of users on different social media platforms felt safe posting on those platforms. Likewise, only 18% of users felt that Meta's Facebook protected their privacy. Concern for corporate use of consumer data led to the creation of the GDPR to curb data misuse. The regulation requires organizations doing business in the EU to appropriately secure personal data and lets individuals access, correct and even erase their personal data. Such compliance requirements have led to a renewed emphasis on data governance, as well as data protection techniques such as anonymization and masking. Addressing consumer privacy as a priority is also a good way to increase customer trust. According to a report from the International Association of Privacy Professionals, 64% of consumers expressed an increase in trust for companies that provide a clear explanation of their privacy policies. Future of consumer privacy The recent enactment of consumer privacy laws, such as the New York Privacy Act, indicates a heightened concern for consumer privacy among various institutions. 
As technology advances and internet-connected devices are increasingly used in everyday tasks and transactions, data becomes more detailed and, therefore, becomes more valuable to those that can profit from it. Newer privacy laws and the ending of third-party cookies might help further protect consumer privacy, for example, but companies can still use zero- and first-party data to market content. As another example, artificial intelligence (AI) and machine learning algorithms often require massive amounts of data to pre-train them, establish patterns and model intelligence. The rapidly growing investment in these data-hungry technologies indicates the likelihood of a sustained interest in data collection for the foreseeable future, and consequently an increased need for consumer privacy policies and frameworks that address new trends in data collection. One case that exemplifies the way these emerging technologies might continue to stir up privacy concerns in the future is Project Nightingale. Project Nightingale was the name of the partnership between Google and Ascension -- one of the largest healthcare systems in the U.S. In late 2019, Google gained access to over 50 million patient health records through the partnership, with the aim of using the data to create tools that enhance patient care. Google also expressed plans to use emergent medical data in this process, which is nonmedical data that can be turned into sensitive health information using AI. However, questions remained about the type and amount of information that would be provided to Google, if notice would be given to patients in advance, if patients could opt out, how many Google employees would be given access to health data and how those Google employees would gain approval to access that data. 
Although the partnership aimed to help millions, potentially changing the healthcare landscape for the better, there were notable privacy concerns, as Ascension healthcare providers and their patients were unaware that their medical records were being distributed. Some speculate that HIPAA's rules surrounding third-party use of data are out of date, allowing for a concerning lack of transparency in the partnership. Others believe most of the concern surrounding the partnership is misplaced. Overall, the competing trends of increasingly advanced data collection technology and improved consumer privacy measures and policies are likely to define the future of consumer privacy. Corporations will likely find new data collection methods, and consumers will likely react with an increased expectation of transparency.
no
Data Privacy
Can Internet Service Providers sell user data without consent?
no_statement
"internet" service providers cannot "sell" "user" "data" without "consent".. selling "user" "data" without "consent" is not allowed for "internet" service providers.
https://www.asc.upenn.edu/news-events/news/americans-dont-understand-what-companies-can-do-their-personal-data-and-thats-problem
Americans Don't Understand What Companies Can Do With Their ...
Americans Don’t Understand What Companies Can Do With Their Personal Data — and That’s a Problem A new survey of 2,000 Americans finds that people don’t understand what marketers are learning about them online and don’t want their data collected, but feel powerless to stop it. By Hailey Reissman Have you ever had the experience of browsing for an item online, only to then see ads for it everywhere? Or watching a TV program, and suddenly your phone shows you an ad related to the topic? Marketers clearly know a lot about us, but the extent of what they know, how they know it, and what they’re legally allowed to know can feel awfully murky. In a new report, “Americans Can’t Consent to Companies’ Use of Their Data,” researchers asked a nationally representative group of more than 2,000 Americans to answer a set of questions about digital marketing policies and how companies can and should use their personal data. Their aim was to determine if current “informed consent” practices are working online. They found that the great majority of Americans don’t understand the fundamentals of internet marketing practices and policies, and that many feel incapable of consenting to how companies use their data. As a result, the researchers say, Americans can’t truly give informed consent to digital data collection. The survey revealed that 56% of American adults don’t understand the term "privacy policy," often believing it means that a company won't share their data with third parties without permission. In actual fact, many of these policies state that a company can share or sell any data it gathers about site visitors with other websites or companies. Perhaps because internet privacy feels impossible to comprehend — with “opting-out” or “opting-in,” biometrics, and VPNs — many Americans don’t trust what is being done with their digital data. Eighty percent of Americans believe that what companies know about them can cause them harm.
“People don't feel that they have the ability to protect their data online — even if they want to,” says lead researcher Joseph Turow, Robert Lewis Shayon Professor of Media Systems & Industries at the Annenberg School for Communication at the University of Pennsylvania. What Americans Know — or Don’t Americans often encounter the idea of “informed consent” in the medical context: for example, you should understand what your doctor is recommending and the possible benefits and risks before you agree to or decline a procedure. That same principle has been applied by lawmakers and policymakers with respect to the commercial internet. People must either explicitly “opt in” for marketers to take and use data about them, or have the ability to “opt out.” This presupposes two things: that people are informed — that they understand what is happening to their data — and that they’ve provided consent for it to happen. In order to test both of these elements, the survey presented 17 true/false statements about internet practices and policies and asked participants to mark them as true or false, or indicate that they did not know the answer. Statements included, “A company can tell that I have opened its email even if I don’t click on any links” (which is true) and “It is illegal for internet marketers to record my computer’s IP address” (which is false). The chart shows that the vast majority (77%) of those surveyed would have received an F in most American classrooms (0-53% correct). Another 15% would have received a D (54-65%), 6% would have received a C (71-76%), 1% would have received a B (82-88% correct), and a single person would have gotten an A, with 94% correct. Fully 77% of those surveyed answered 9 or fewer questions correctly — a failing grade in a typical classroom. Only one person in the entire 2,000-person sample would have received an “A” on the test.
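The grading arithmetic above is easy to verify: with 17 items, 9 correct answers works out to about 52.9%, below the 54% floor the report treats as a D. A minimal sketch, with a helper function name of our own invention:

```python
# Verify the survey's grading arithmetic: 17 true/false items, where 9 or
# fewer correct falls below the ~54% cutoff for a D. Helper is illustrative.

def percent_correct(correct: int, total: int = 17) -> float:
    """Score as a percentage of the 17 survey items."""
    return 100 * correct / total

# 9 of 17 correct is about 52.9% -- a failing grade.
# 16 of 17 correct is about 94.1% -- the lone "A" in the sample.
```

This also shows why the letter-grade bands quoted in the text have gaps (e.g. no 66-70% band): with only 17 items, not every percentage is achievable.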
Only around 1 in 3 Americans knows it is legal for an online store to charge people different prices depending on where they are located. More than 8 in 10 Americans believe, incorrectly, that the federal Health Insurance Portability and Accountability Act (HIPAA) stops apps from selling data collected about app users’ health to marketers. Fewer than one in three Americans know that price-comparison travel sites such as Expedia or Orbitz are not obligated to display the lowest airline prices. Fewer than half of Americans know that Facebook’s user privacy settings allow users to limit some of the information about them shared with advertisers. “Being wrong about such facts can have real consequences,” the report notes. For example, a person who uses a fertility app to facilitate family planning may not realize that U.S. health privacy laws don’t prevent the app from selling their fertility data to a third party. In addition, retailers can sell data on who has shopped for fertility-related items, and in many cases, internet service providers can sell your browser search history. What if your employer or health insurer had that information? In the wake of the Dobbs decision that allows states to regulate abortion, experts also fear that the availability of this seemingly personal data may leave individuals legally vulnerable as well. “We the Resigned” In this study, and others Turow and colleagues have conducted in the past, virtually all Americans agree that they want to have control over what marketers can learn about them online. But at the same time, they see that outcome as virtually impossible. This belief that control is out of your hands and that it’s pointless to try and change a situation is called resignation, Turow says. Most Americans are resigned to living in a world where marketers taking and using your data is inevitable. And Turow has seen a large uptick in resignation to privacy intrusions among Americans.
In 2015, his research showed that 58% of Americans were resigned. Now that figure is up to 74%. “The levels of resignation and the levels of distrust are huge,” he says. “Only 14% of Americans believe that companies can be trusted to use their data with their best interests in mind, so the vast majority of people who use the internet are essentially relinquishing their data to entities that they don’t trust.” Calling on Congress to Act The study found that nearly 80% of Americans believe it is urgent for Congress to act now in order to regulate how companies can use personal information. “We live in a society where there's a sea of data collected about individuals that people don't understand, know they don’t understand, are distrustful of, resigned to and believe can harm them,” Turow says. “At the same time, the kinds of technologies that are being used to track people are getting more sophisticated all the time.” He worries that the longer governments wait to change things, the harder it will be to control any of our data. “For about 30 years, big companies have been allowed to shape a whole environment for us, essentially without our permission,” he says. “And 30 years from now, it might be too late to say, ‘This is totally unacceptable.’” Conclusions and Solutions To date, privacy laws have been focused on individual consent, favoring companies over individuals, putting the onus on internet users to make sense of whether — and how — to opt in or out. “We have data now that shows very strongly that the individual consent model isn’t working,” Turow says. He and his co-researchers suggest that policymakers “flip the script” and restrict the advertising-based business model to contextual advertising, where companies can target people based only on the environment in which they find customers.
For example, if you visited a website about cars, automotive-related companies would be allowed to display advertisements to you, though without any data on your individual behavior on that website. The researchers realize they are calling for a paradigm shift in information-economy law and corporate practice, but believe consumers deserve dramatically more privacy than they currently have — and should get it without the burden of becoming technology experts. Turow hopes that this research will jumpstart a new conversation about privacy and consent — one that will encourage individuals to shake off their resignation. “It can feel too late to change things,” Turow says, “but I think we should try.” The report is entitled, “Americans Can’t Consent to Companies’ Use of Their Data: They Admit They Don’t Understand It, Say They’re Helpless To Control It, and Believe They’re Harmed When Firms Use Their Data – Making What Companies Do Illegitimate.” In addition to Turow, authors include Yphtach Lelkes (Annenberg School for Communication, University of Pennsylvania), Nora A. Draper (University of New Hampshire), and Ari Ezra Waldman (Northeastern University). Funding for the project came from an unrestricted grant from Facebook, which was not involved in the research.
yes
Data Privacy
Can Internet Service Providers sell user data without consent?
no_statement
"internet" service providers cannot "sell" "user" "data" without "consent".. selling "user" "data" without "consent" is not allowed for "internet" service providers.
https://www.privacypolicies.com/blog/isp-tracking-you/
Your ISP Is Tracking Every Website You Visit: Here's What We Know ...
Your ISP Is Tracking Every Website You Visit: Here's What We Know Last updated on 01 July 2022 by PrivacyPolicies.com Legal Writing Team Despite the privacy precautions you take, there is someone who can see everything you do online: your Internet Service Provider (ISP). When it comes to online privacy, there are a lot of steps you can take to clean up your browsing history and prevent sites from tracking you. Most modern web browsers include some form of privacy mode, which allows you to surf without saving cookies, temporary files, or your browsing history to your computer. Many browsers also include a "Do Not Track" mode, which automatically tells websites you want to opt out of tracking cookies and similar technologies used for advertising purposes. While these solutions may keep advertisers and anyone using your computer from viewing your browsing history, your ISP can still watch your every move. Why Is Your ISP Tracking You? There probably isn't someone sitting behind a desk at your ISP watching every click you make, but that doesn't mean your browsing history isn't getting stored somewhere on their systems. Your ISP tracks your clicks for a number of reasons. For them, your browsing history is a revenue stream. Many ISPs compile anonymous browsing logs and sell them to marketing companies. Some Internet providers are even moving to make privacy a premium add-on, using your Internet history to market to you in much the same way websites do, unless you pay an additional monthly fee. What's more, the data your ISP collects may be accessed by outside organizations, such as the police department or another government agency. If provided with a subpoena, your ISP is legally required to provide whatever information they have on you. Why Should You Care? The obvious question here is, what does it matter? We're advertised to all day long on the Internet, what's a few more targeted ads?
And who cares if the government uses ISP information to bust some criminals or crack down on terrorism. That's a good thing, right? If only it were that simple. For most people, knowing the government could view our online activity probably doesn't seem too scary. But if you live under an oppressive government, even seemingly innocent online activity can be very dangerous. Plus, in an era of almost-daily data breaches, assuming your information is safe with anyone is naive at best. Even ISPs can be affected. So take a moment and think about everything your ISP could potentially know about you. Maybe you use BitTorrent to download the occasional copyrighted song or movie. Maybe you've been viewing sites you would prefer your family not know about. If you did some research on cancer warning signs, would you want your health insurance provider to know? And do you really want your boss to find out how actively you're looking for a new job? Your browsing history says a lot about you, and most of us would prefer that it stayed between us and our computer. Since your Internet Service Provider stands between you and everything online, you can't completely hide from them. The best you can do is confuse them by covering your tracks. How The Onion Router (Tor) Can Help The Tor Project was originally sponsored by the U.S. Naval Research Laboratories as a means of protecting sensitive government communications. It is now a non-profit organization dedicated to improving online privacy tools. When you use the Tor Browser, your activity is encrypted and sent across a network of Tor servers, making it much harder to trace back to your computer. Let's say, for instance, that you are trying to speak out against your government's very strict censorship laws. Doing so on a regular browser could land you in jail or worse. By using Tor, when the government tries to trace that activity, they will see it linked to random servers around the world, not your computer. 
There are a number of other anonymous browser projects, including I2P and Freenet, but Tor remains by far the most popular. For anyone who wants to completely encrypt their Internet experience, some Linux-based operating systems, such as Tails, utilize the Tor Network for handling all Internet activity, even if it's not browser based. These types of anonymous browsing tools have developed an unfortunate reputation, because the same technology that makes them ideal for protecting user privacy also makes them ideal for conducting illegal activities online. VPNs and Proxies Virtual Private Networks (VPNs) are most commonly used by businesses to allow employees to work remotely. When you log in from your home or while traveling, the VPN provides an encrypted connection to your work's network, allowing you to work just as securely as if you were in the office. Your browsing history over the VPN is not viewable by your ISP, but it may be viewable by your employer. A number of companies now provide VPN access for regular Internet users. Like a VPN for work, these systems allow you to encrypt your online activity, so your ISP cannot track it. These types of private VPNs can be used to provide secure browsing while you're connected to a public Internet connection, or to mask your online activities from your ISP. Similar to VPNs, there are a number of proxy services that will hide your IP address and encrypt your online activity. Programs like Proxify can be installed on your device to allow anonymous browsing, while others like Anonymouse must be accessed through the provider's website. Be careful when choosing a VPN or proxy service. While they should all allow you to mask your activity from your ISP and the websites you visit, some of them may actually keep their own logs of your browsing activity. Be sure to check their terms of service; otherwise you may wind up paying for the same lack of privacy you were already getting!
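As a concrete sketch of the proxy approach, Python's requests library can route traffic through a local SOCKS5 proxy such as Tor's default listener, which is conventionally 127.0.0.1:9050. The host and port here are assumptions (substitute your own proxy), and actually connecting requires a running proxy plus the `requests[socks]` extra.

```python
# Build a proxies mapping for Python's requests library pointing at a local
# SOCKS5 proxy (Tor's default listener, 127.0.0.1:9050, is assumed here).
# The "socks5h" scheme resolves DNS through the proxy as well, so even your
# hostname lookups are hidden from your ISP.

def socks_proxies(host: str = "127.0.0.1", port: int = 9050) -> dict:
    """Return a requests-style proxies dict for a SOCKS5 proxy."""
    url = f"socks5h://{host}:{port}"
    return {"http": url, "https": url}

# Usage (requires a running Tor/SOCKS proxy and `pip install requests[socks]`):
#   import requests
#   r = requests.get("https://check.torproject.org", proxies=socks_proxies())
```

Note the caveat from the text applies to this setup too: the proxy operator, rather than your ISP, now sees your traffic, so the trustworthiness of the proxy matters as much as the encryption.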
A Final Word of Caution While the above tools are perfectly legal to utilize, the activities you choose to use them for are still governed by the same laws as everything else you do online. They may make it harder for your ISP or anyone else to track your activities, but they won't make it impossible. If you're doing something that deserves to be on the FBI's radar, don't expect to get away with it just because you're using Tor. And remember, privacy can be a very powerful tool, but everyone's privacy is put in jeopardy by those who abuse it.
yes
Petrology
Can Jasper be classified as a precious gemstone?
yes_statement
"jasper" is "classified" as a "precious" "gemstone".. "jasper" can be categorized as a "precious" "gemstone".
https://rocktumbler.com/tips/semiprecious-precious/
What are Semiprecious Stones? What are Precious Stones?
A Classification of the 1800s Gemstones were first classified into the categories of "precious stones" and "semiprecious stones" in the mid-1800s. These terms quickly became popular and are now used throughout the world. Most people who sell gems and jewelry are very familiar with the terms and have used them. The terms remain in common usage today. They are used in perhaps a majority of books about gems. They are in thousands of websites. They are commonly used in discussions throughout the gemstone and jewelry industry. However, the words "precious" and "semiprecious" are controversial. Some people do not like them. These people think that all stones are precious. They believe that any stone of beauty is a "precious gem". How could anyone disagree with that? What Are "Precious Stones"? "Precious stones" is a name that is usually used in reference to four types of gems: diamonds, rubies, sapphires, and emeralds. Precious stones are usually transparent and cut by faceting - like the stones shown in the accompanying image. Some people have included opal, jade, or pearls in the "precious stones" class, but these have not received persistent and widespread use. Separating stones into "precious" and "semiprecious" classes has led many people to believe that "precious stones" are more important and more valuable than "semiprecious stones." This idea is somewhat supported by the fact that diamonds, rubies, sapphires, and emeralds generally cost more per carat than semiprecious stones. Precious stones also account for over 98% of the dollar value of U.S. gemstone imports for consumption reported by the United States Geological Survey [1]. What Are "Semiprecious Stones"? The names "semiprecious stones" and "semi-precious stones" are used for all varieties of gemstones that are not categorized as "precious." Any gemstone suitable for being used in personal adornment would be included. 
Some people believe that the word "semiprecious" is derogatory, irreverent, misleading, or confusing, and that its use should be discontinued. They think that there should be "precious stones" and "other stones". Perhaps these people want to cast all gems but a "precious few" under a derogatory light? Unfortunately, eliminating the word "semiprecious" from use would be extremely difficult. Over the past 150 years, scores of popular books have been written with the word "semiprecious" in their titles. Today the terms appear repeatedly in thousands of books, magazines, web pages, and other documents published by companies in the gem and jewelry industry, government agencies, and the most influential institutions in gemology. Purging these terms from professional use would be difficult, but, eliminating them from common use would be nearly impossible - especially because some people really like these names. Does "Precious" Mean Valuable, Rare, Beautiful or Desirable? The division of gemstones into the categories of "precious" and "semiprecious" might give some people the idea that "precious stones" are more valuable, more rare, more beautiful, or more desirable than "semiprecious stones." Here are just three of the problems with calling some stones "precious" on the basis of their value, rarity, beauty or desirability. The "Value" Problem In 2004 the Aurora Australis Opal sold for $1 million, a price of over $5,500 per carat. High-quality 8x10 millimeter jade cabochons weigh about 2.5 carats and can sell for as much as $25,000. Gems cut from red beryl have been sold for over $10,000 per carat. These gems have much higher values per carat than most individual "precious stones" sold in the United States market. Their prices are higher than most diamonds of similar carat weight. These examples are clear evidence that semiprecious stones can be worth a lot of money. The "Rarity" Problem Many semiprecious stones are also more rare than precious stones. 
Red beryl, ammolite, benitoite, gem silica, demantoid garnet, tsavorite garnet, tanzanite, ametrine, and numerous other gems are all found in fewer locations and produced in smaller quantities than any of the "precious" stones. They are incredibly rare in comparison, but that does not earn them the term "precious." The "Beauty" and "Desirability" Problems Beauty and desirability are both properties that are based upon the opinion of the observer. It would be interesting to present excellent specimens of diamond, ruby, emerald, sapphire and opal to a random cross section of people and ask them which, in their opinion, is the most beautiful or desirable. It is possible that opal, typically considered to be a "semiprecious stone", would win or place higher than the "precious stones" in the contest. A Consideration of "Grade" Furthermore, the words "precious" and "semiprecious" do not consider the "grade" of the stones. "Grade" is a general measure of gemstone quality and marketability that considers color, clarity, and potential price. Some rubies, sapphires, emeralds, and diamonds are of a grade that gives them a very low price - often low enough that large numbers of semiprecious stones will be more highly valued. For these reasons, the terms "precious stone" and "semiprecious stone" are arbitrary and meaningless. If the terms suddenly disappeared from the language of gems and jewelry, there would be no loss of accuracy and precision in common communication. At the same time, a bit of confusion would disappear with them. Perhaps then, the terms should be eliminated, but they are so entrenched in the industry and common usage that eliminating them would be essentially impossible. Focus On What Appeals to You A person who is interested in purchasing an item of jewelry should not be influenced by the names "precious" or "semiprecious."
Instead they should focus on what gemstone appeals to them, suits their intended use, and has a price that they are willing to pay. The names "precious" and "semiprecious" are old and arbitrary designations that have never been truly meaningful.

Personal Definitions of "Precious"

A person who spends a week's pay to buy an amethyst ring should certainly think that their amethyst is "precious". A person who finds a wonderful piece of agate, cuts it into a cabochon, and sets it in a commercial mounting will think that it is precious. A child with a favorite tumbled stone has a precious possession. No one has a right to deny these people their thoughts. Happy Tumbling!

RockTumbler.com Authors

Hobart M. King has decades of rock tumbling experience and writes most of the articles on RockTumbler.com. He has a PhD in geology and is a GIA graduate gemologist. He also writes the articles about rocks, minerals and gems on Geology.com.
A Classification of the 1800s

Gemstones were first classified into the categories of "precious stones" and "semiprecious stones" in the mid-1800s. These terms quickly became popular and are now used throughout the world. Most people who sell gems and jewelry are very familiar with the terms and have used them. The terms remain in common usage today. They are used in perhaps a majority of books about gems. They are in thousands of websites. They are commonly used in discussions throughout the gemstone and jewelry industry. However, the words "precious" and "semiprecious" are controversial. Some people do not like them. These people think that all stones are precious. They believe that any stone of beauty is a "precious gem". How could anyone disagree with that?

What Are "Precious Stones"?

"Precious stones" is a name that is usually used in reference to four types of gems: diamonds, rubies, sapphires, and emeralds. Precious stones are usually transparent and cut by faceting - like the stones shown in the accompanying image. Some people have included opal, jade, or pearls in the "precious stones" class, but these have not received persistent and widespread use. Separating stones into "precious" and "semiprecious" classes has led many people to believe that "precious stones" are more important and more valuable than "semiprecious stones." This idea is somewhat supported by the fact that diamonds, rubies, sapphires, and emeralds generally cost more per carat than semiprecious stones. Precious stones also account for over 98% of the dollar value of U.S. gemstone imports for consumption reported by the United States Geological Survey [1].

What Are "Semiprecious Stones"?

The names "semiprecious stones" and "semi-precious stones" are used for all varieties of gemstones that are not categorized as "precious." Any gemstone suitable for being used in personal adornment would be included.
no
Petrology
Can Jasper be classified as a precious gemstone?
yes_statement
"jasper" is "classified" as a "precious" "gemstone".. "jasper" can be categorized as a "precious" "gemstone".
https://www.ga.gov.au/education/classroom-resources/minerals-energy/australian-mineral-facts/australian-gems
Australian gems | Geoscience Australia
The test of a good gemstone is its resistance to wear and tear. Using properties of minerals such as habit, shape, lustre, light refraction and specific gravity we can tell the difference between similar looking gemstones. Most gemstones are harder than quartz (Mohs scale greater than 7) and cannot be scratched by the blade of a knife. For example, diamond has a specific gravity of 3.52 and a cubic zirconia, which looks very similar, has a specific gravity of 5.80. This means that cubic zirconia is denser than diamond. A two carat diamond is larger than a two carat cubic zirconia and much more expensive.

Precious, semi-precious or ornamental stones

In the mid-1800s, gemstones were first classified as either ‘precious’ or ‘semi-precious’. However, these divisions are not scientific and have never been truly meaningful. Diamonds, rubies, sapphires, and emeralds were originally considered the ‘precious stones’, but sometimes this category included opal, jade, or pearls. Gemstones that were referred to as semi-precious are used in jewellery and ornaments. Many people in the gem and jewellery industry do not like the terms precious and semi-precious, because they do not take into account the grade of the gemstone. Precious gemstones are not always rarer or more valuable than semi-precious gemstones. Gemmologists use grade as a general measure of gemstone quality, using the 4Cs (clarity, colour, cut, and carat) to determine the potential price. The 'beauty' of a gemstone is evaluated by examining how light is transmitted or refracted through the gem or reflected from the gem's surface. A gem can be coloured or have changing colour patterns, differing levels of transparency, lustre and brilliance. In addition, in some gems there is dispersion of light or 'fire'. Some of these properties are qualitative, so can be described rather than measured; and some are quantitative and can be measured using appropriate optical instruments.
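The specific-gravity comparison above can be made concrete with a little arithmetic: one carat is 0.2 g, so a stone's volume is simply its mass divided by its specific gravity. A minimal Python sketch, using only the figures quoted in the passage (the function and dictionary names are our own illustration, not any standard library):

```python
# Why a two-carat diamond is physically larger than a two-carat cubic zirconia:
# equal mass, but cubic zirconia is denser, so it occupies less volume.

CARAT_IN_GRAMS = 0.2  # one carat = 200 milligrams

# Specific gravities quoted in the passage (numerically equal to density in g/cm^3)
SPECIFIC_GRAVITY = {
    "diamond": 3.52,
    "cubic zirconia": 5.80,
}

def volume_cm3(carats, stone):
    """Approximate volume of a stone from its carat weight and specific gravity."""
    mass_g = carats * CARAT_IN_GRAMS
    return mass_g / SPECIFIC_GRAVITY[stone]

diamond = volume_cm3(2, "diamond")
cz = volume_cm3(2, "cubic zirconia")
print(f"2 ct diamond: {diamond:.3f} cm^3")        # 0.114 cm^3
print(f"2 ct cubic zirconia: {cz:.3f} cm^3")      # 0.069 cm^3
```

The two stones weigh exactly the same, but the diamond occupies roughly 65% more volume, which is why it looks noticeably bigger in a setting.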
Another term sometimes used is ‘ornamental gemstone’. This term is used to describe minerals that lack transparency, but have attractive colours, textures and patterns such as jade, malachite, chalcedony and lapis lazuli. They are not all rare, and most have a hardness of less than 7 on the Mohs scale.

The 4 Cs for gemstones

Gemstones are valued according to four different criteria: clarity, colour, cut and carat (weight or size). Clarity is the quality most prized in gemstones. A perfect gemstone is a flawless, transparent crystal that sparkles brilliantly as it reflects light internally. Sometimes crystals contain inclusions, which are impurities that distort the appearance of the gemstone. Some gemstones, such as star sapphires, pink diamonds and rutilated quartz, are valued even more because of these inclusions. Inclusions can be used to identify if a gemstone is naturally formed or synthetically made. Clarity ranges from internally flawless to imperfect. Bright and intense colour will increase the value of a gemstone. Colourless beryl is only moderately valued, but emerald (green beryl) is one of the world’s most valued stones. Jade, turquoise and lapis lazuli have rich green and blue colours making them hugely sought after. Many gemstones acquire their colours due to their chemical composition and trace elements or impurities contained in the stones. Peridot (gemstone olivine) is commonly green due to its chemical composition, but can vary from pale lemon to dark olive green. Colourless diamonds are usually the most highly valued; however, diamonds tinted blue or pink from impurities are sometimes more valuable because they are so rare; many of the diamonds from the Argyle diamond mine in Western Australia are pink or champagne coloured, increasing their value. Most gemstones which are used for jewellery have been cut or faceted. If cut incorrectly, gemstones will have less sparkle and consequently be poorer quality.
The term carat refers to the weight of a gemstone. One carat = 200 milligrams (1/5 of a gram).

Formation

Unusual geological conditions are required to create gemstones, which is why they are so rare. Gemstones are often found in igneous rocks. Pegmatite, an intrusive igneous rock, may concentrate rare minerals to form gemstones such as beryl, ruby, sapphire, tourmaline and topaz. Intense metamorphism may create garnet, emerald, jade and lapis lazuli.

Glossary

Birefringence: The splitting of a single ray of light into two rays (also referred to as double refraction). Birefringent gemstones have two different refractive indices; this makes the optical phenomenon very useful for gemstone dealers to correctly identify certain man-made fakes from real gemstones.
Carat: The mass of a gemstone. One carat = 200 milligrams (1/5 of a gram).
Crystal: A solid mineral enclosed by symmetrically arranged planes.
Crystal shape: The shape that a crystallising mineral will take reflects the internal arrangement of its atoms and molecules.
Crystal structure: The arrangement of atoms or molecules in a material, creating a lattice exhibiting order and symmetry.
Crystalline: Having the structure and form of a crystal.
Crystallisation: The process by which crystals are formed.
Element: A pure chemical substance consisting of a single type of atom distinguished by its atomic number, which is the number of protons in its atomic nucleus.
Faceted: When a crystal is cut with flat surfaces, it is said to be faceted.
Fluorescence: Emission of visible light when a mineral is exposed to radiation such as ultraviolet light or X-rays.
Grains: Particles of sediment ranging in size from tiny bits of clay to sand to enormous boulders. Sediments can be transported then deposited by water, wind or ice and this will wear the grains down to smaller sizes.
Granite: Common igneous rock usually composed of the minerals quartz, feldspar and biotite mica or hornblende. Granite is made of large crystals that grew slowly as magma cooled deep underground.
Lustre: Describes how light is reflected from a mineral’s surface (reflectivity).
Magma: Molten rock.
Metamorphism: The process of one rock changing to another rock because of heat and/or pressure.
Mica: A silicate mineral containing iron and magnesium. It forms flat sheets and has a shiny appearance and can be black or colourless.
Mineral: A naturally occurring, inorganic substance with a reasonably fixed chemical composition and crystalline structure. Minerals usually have a crystalline form but not all crystals are made of rock-forming minerals (e.g. sugar).
Molecule: A number of atoms held together by bonds, forming the smallest complete unit of a substance; an individual molecule of water contains three separate atoms held together by two bonds.
Phosphorescence: The emission of visible light for some time after the stimulating radiation causing fluorescence has been turned off.
Quartz: A relatively hard mineral made of silica (SiO2) and typically occurring as colourless or white hexagonal prisms. It is often coloured by impurities.
Recrystallise: A metamorphic process that occurs under situations of intense temperature and pressure where grains, atoms or molecules of a rock or mineral are packed closer together, creating a new crystal structure. The basic composition remains the same.
Refraction: The bending of light as it travels through materials with different densities.
Rock: Naturally occurring, solid aggregate of one or more minerals, or mineraloids. For example, the common rock granite is a combination of quartz, feldspar and biotite or amphibole minerals.
Sediment: Naturally occurring material that is broken down by processes of weathering and erosion, and is then transported by the action of wind, water, or ice, consisting of rock or mineral particles.
Sheen: A special visual effect observed in gems due to reflection of light from the internal structure of the stone.
Streak: The colour of powdered mineral, often found by scratching the mineral on an unglazed white tile.
Volcanic: Igneous rocks that have formed from products of volcanic activity such as lava.
Weathering: The process in which the texture and composition of rocks, sediments and regolith change after being exposed at or near the Earth’s surface to weathering agents such as water, oxygen, organic acids and large temperature fluctuations. Weathering can be chemical or physical (mechanical) and includes changes by the effects of gravity, the atmosphere, the hydrosphere and/or the biosphere at normal temperatures and pressures.
no
Petrology
Can Jasper be classified as a precious gemstone?
yes_statement
"jasper" is "classified" as a "precious" "gemstone".. "jasper" can be categorized as a "precious" "gemstone".
https://copelandjewelers.com/precious-vs-semi-precious-gemstones/
Precious vs Semi-Precious Gemstones | Copeland Jewelers
Precious vs. Semi-Precious: Is My Tiny Diamond Worth More Than My Large Amethyst?

You’ve heard the terms “precious” and “semi-precious” used to describe different types of gemstones, but what do they really mean? Diamonds are precious, so they must be worth more than semi-precious amethysts, right? Well, not necessarily. There are a lot of factors that go into determining the relative worth of different stones, and those two confusing terms really have nothing to do with it. Copeland Jewelers is here to set the record straight!

Precious vs Semi-Precious Gemstones

Believe it or not, “precious” and “semi-precious” are actually arbitrary terms when it comes to gemstones. In fact, the U.S. Federal Trade Commission has discussed banning these descriptors so as not to confuse consumers. The terms were originally coined in the mid-1800s by jewelry executives who wanted to make certain stones seem more valuable than others and also denote that they were popular and rare. The distinction between the two kinds of jewels is generally only used in the West.

What Stones Are Considered Precious?

Today, the only “precious” stones are: diamonds, rubies, sapphires, and emeralds.

What Stones Are Considered Semi-Precious?

Opal, amethyst, and pearls were once classified as precious but are now categorized as semi-precious. Other semi-precious stones include: turquoise, topaz, garnet, citrine, jade, peridot, jasper, and rose quartz. Materials used for jewelry that are not minerals, such as amber and jet, are also considered semi-precious stones. In fact, the list of semi-precious gems is incredibly long and includes many materials you’ve probably never heard of, such as algodonite, haüyne, euxenite, and chicken-blood stone, just to name a few.

Does the Classification of Precious or Semi-Precious Depend on Availability?

You might be thinking that the terms “precious” and “semi-precious” at least have something to do with how abundant the different gems are, or how easy they are to find, right? Nope!
Amethysts were downgraded once large deposits were discovered in South America in the nineteenth century. Diamonds, however, kept their “precious” classification even after large deposits were discovered in South Africa, also in the nineteenth century. The semi-precious gem tanzanite, however, can only be found in Tanzania and is much rarer than diamonds, emeralds, rubies, and sapphires. Alexandrites and tsavorite garnets are also very rare but considered “semi-precious.” Many high-quality semi-precious stones command much greater prices than medium-quality precious stones, so it’s truly all relative.

Is There a Different Way to Classify Gemstones That Isn’t so Confusing?

Definitely. Gemologists label them based on their physical features, such as the type of crystals that form the stones or the chemicals that compose them. For example, diamonds are made of carbon. Gemstones are also categorized by their hardness, luster, specific gravity (a measure of density), dispersion and refractive index (measures of how light passes through the stones), and cleavage and fracture (the patterns in which the stones break). These systems of classifying jewels are based in science and have merit beyond marketing purposes.

So Could Your Tiny Diamond Be Worth More Than Your Large Amethyst?

It’s impossible to measure the actual worth of either stone or piece of jewelry without being able to examine them in person. Our suggestion, when it comes to choosing jewelry for yourself or someone else, is to focus on what you like (or what the recipient would like) rather than arbitrary terms that really have no meaning in the grand scheme of things. It’s good to understand what “precious” and “semi-precious” refer to, but don’t use them as the main basis for your jewelry purchase. Copeland Jewelers is pleased to educate our customers on the finer (and sometimes weirder) points of jewelry, but we’re mostly concerned with helping you choose a piece that will be treasured for years to come!
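The point that the “precious” label tracks no measurable property can be illustrated with a small lookup table. A hedged Python sketch: the Mohs values are widely published textbook approximations, the stone list is only a sample, and the function is our own illustration rather than any gemological standard:

```python
# Traditional labels vs. one measurable property (Mohs hardness).
# Values are common textbook approximations, not authoritative grades.
STONES = {
    # name: (approx. Mohs hardness, traditional label)
    "diamond":  (10.0, "precious"),
    "sapphire": (9.0,  "precious"),
    "emerald":  (7.5,  "precious"),
    "topaz":    (8.0,  "semi-precious"),
    "amethyst": (7.0,  "semi-precious"),
    "jasper":   (6.5,  "semi-precious"),
}

def harder_semiprecious_than(precious_name):
    """Semi-precious stones at least as hard as the given precious stone."""
    hardness, _label = STONES[precious_name]
    return [name for name, (h, label) in STONES.items()
            if label == "semi-precious" and h >= hardness]

# Topaz (8.0) is harder than emerald (7.5), yet only emerald is "precious".
print(harder_semiprecious_than("emerald"))  # ['topaz']
```

The lookup makes the arbitrariness visible: hardness, like rarity and price, cuts straight across the precious/semi-precious divide.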
no
Petrology
Can Jasper be classified as a precious gemstone?
yes_statement
"jasper" is "classified" as a "precious" "gemstone".. "jasper" can be categorized as a "precious" "gemstone".
https://geologyscience.com/minerals/silicates-minerals/jasper/
Jasper | Properties, Formation, Uses » Geology Science
Jasper is a type of mineral that is primarily composed of silica, with other trace elements and impurities giving it its unique colors and patterns. It is a member of the chalcedony family and is typically opaque, although some varieties can be translucent. Jasper is a common mineral that is found in many locations around the world, including the United States, Australia, Brazil, Egypt, and India. Jasper has been used by humans for thousands of years and has been found in archaeological sites dating back to the Neolithic period. It has been used for a variety of purposes, including as a tool for cutting and engraving, as a material for jewelry and decorative objects, and for ceremonial purposes. In some cultures, Jasper has been considered to have healing properties and has been used in traditional medicine. There are many different types of Jasper, each with its own unique color and pattern. Some of the most popular types of Jasper include Red Jasper, Picture Jasper, Dalmatian Jasper, Mookaite Jasper, Green Jasper, and Yellow Jasper. These different varieties of Jasper can be used for a variety of purposes, such as creating jewelry, decorative objects, and even as building materials. Jasper is also significant in art and literature, with many artists and writers incorporating Jasper into their works. In some cultures, Jasper has been considered to have spiritual and cultural significance, symbolizing strength, courage, and protection. Today, Jasper is still used in many different ways, from creating beautiful pieces of jewelry to being used as a material in building and construction. It continues to be a popular mineral due to its unique colors, patterns, and physical properties, as well as its cultural and historical significance.

Formation

Jasper is a type of chalcedony, which is formed from microscopic crystals of silica. It is formed in a variety of ways, but most commonly it is formed in sedimentary rocks where there is a high concentration of silica.
Silica-rich fluids can flow through the porous rock, and over time, the silica can accumulate and form chalcedony deposits. The silica can come from a variety of sources, including volcanic ash, marine organisms, and mineral-rich groundwater. Jasper can also be formed through the process of silicification, which occurs when organic material, such as wood or bone, is replaced by silica. This can happen when the organic material is buried in sedimentary rock, and groundwater rich in silica flows through the rock and replaces the organic material with chalcedony. The result is a fossilized object with the appearance and physical properties of Jasper. The color and patterns of Jasper are determined by the presence of trace elements and impurities that are present in the silica-rich fluids during the formation process. For example, the presence of iron oxides can create the red coloration of Red Jasper, while the presence of manganese can create the spotted patterns in Dalmatian Jasper. Overall, the formation of Jasper is a complex and fascinating process that involves the accumulation and replacement of silica in sedimentary rocks over time. The resulting mineral is valued for its beauty and versatility, and continues to be used in a variety of applications today.

Physical Properties

Jasper is a mineral that has a number of unique physical properties that make it valuable for various applications. Here are some of the key physical properties of Jasper:
Hardness: Jasper has a hardness of 6.5 to 7 on the Mohs scale, which means that it is relatively hard and can withstand scratching and abrasion.
Density: Jasper has a density of around 2.5 to 2.9 g/cm³, which makes it relatively heavy compared to other minerals.
Luster: Jasper has a dull to waxy luster, which means that it does not reflect light well and has a somewhat opaque appearance.
Color: Jasper can be found in a wide range of colors, including red, green, yellow, brown, and black, among others. The colors are determined by the presence of trace elements and impurities during the formation process.
Pattern: Many types of Jasper have distinct patterns or markings, such as the spots in Dalmatian Jasper or the swirling lines in Picture Jasper.
Translucency: Jasper can range from opaque to translucent, depending on the specific type of Jasper.
Refractive Index: The refractive index of Jasper is between 1.53 and 1.54, which means that it has a relatively low index of refraction.
Cleavage: Jasper has no cleavage, which means that it does not break along any specific planes.

These physical properties make Jasper a valuable mineral for a variety of applications, including jewelry-making, decorative objects, and even building materials. Its hardness and density make it durable and long-lasting, while its wide range of colors and patterns make it a versatile material for creative expression.

Chemical Properties

Jasper is primarily composed of silica, with trace amounts of other minerals and impurities giving it its distinctive color and patterns. Here are some of the key chemical properties of Jasper:
Chemical formula: Jasper has the same chemical formula as chalcedony, which is SiO2. This means that it is composed of silicon and oxygen atoms in a ratio of 1:2.
Mineral composition: Jasper is a member of the chalcedony family and is composed of microscopic crystals of silica.
Impurities: The color and pattern of Jasper are determined by the presence of impurities, such as iron oxides, manganese, and other trace elements.
Mohs hardness: Jasper has a Mohs hardness of 6.5 to 7, which means that it is relatively resistant to scratching and abrasion.
Acid resistance: Jasper is resistant to most acids, although it can be affected by hydrofluoric acid.
Density: The density of Jasper ranges from 2.5 to 2.9 g/cm³, which is relatively high compared to other minerals.
Thermal properties: Jasper is a poor conductor of heat and electricity, which makes it useful in insulation applications.
Optical properties: Jasper has a low index of refraction, which means that it does not bend light as much as other minerals.

Overall, the chemical properties of Jasper are relatively simple, with the mineral being primarily composed of silica with trace amounts of other impurities. These impurities give Jasper its distinctive colors and patterns, which make it a valuable mineral for a wide range of applications.

Optical Properties

Jasper has a number of interesting optical properties that make it valuable for various applications. Here are some of the key optical properties of Jasper:
Refractive index: The refractive index of Jasper ranges from 1.53 to 1.54, which is relatively low compared to other minerals.
Birefringence: Jasper has a low birefringence, which means that it does not split light into two polarized beams like some other minerals.
Dispersion: Jasper has a very low dispersion, which means that it does not break light into its component colors like diamonds or other highly dispersive minerals.
Transparency: The transparency of Jasper varies depending on the specific type of Jasper, but it is generally opaque to semi-translucent.
Pleochroism: Jasper does not exhibit pleochroism, which means that it does not show different colors when viewed from different angles.
Fluorescence: Some types of Jasper, such as Green Jasper, may exhibit fluorescence under ultraviolet light.
Color: The color of Jasper is determined by the presence of impurities, such as iron oxides or manganese, and can range from red to green to yellow to brown, among others.

Overall, the optical properties of Jasper are relatively simple, with the mineral having a low refractive index and dispersion, and generally being opaque or semi-translucent.
However, the unique colors and patterns of Jasper make it a valuable and beautiful mineral for a variety of decorative and ornamental applications.

Physical and visual differences between types of Jasper

The physical and visual differences between the types of Jasper are mainly determined by the mineral impurities present in the stone, which affect its color and pattern. Here are some of the key physical and visual differences between some common types of Jasper:
Red Jasper: Red Jasper is typically a deep red to reddish-brown color, with occasional banding or swirling patterns in lighter shades of red or orange. It often has a smooth or waxy texture, and may have a matte or glossy finish.
Green Jasper: Green Jasper is typically a dark green to light green color, often with swirling patterns of lighter green or white. It may have a smooth or rough texture, and may have a matte or glossy finish.
Yellow Jasper: Yellow Jasper is typically a pale yellow to dark yellow color, with occasional banding or swirling patterns in lighter shades of yellow or orange. It often has a smooth or waxy texture, and may have a matte or glossy finish.
Picture Jasper: Picture Jasper has distinct patterns and markings that resemble landscapes or other images, such as mountains, trees, or rivers. These patterns may be in shades of brown, beige, black, or red, and may have a matte or glossy finish.
Dalmatian Jasper: Dalmatian Jasper is typically a white to beige color with black spots or markings, resembling the coat of a Dalmatian dog. It may have a smooth or rough texture, and may have a matte or glossy finish.
Ocean Jasper: Ocean Jasper is typically a mix of shades of green, blue, and white, with swirling patterns resembling ocean waves or bubbles. It often has a smooth or waxy texture, and may have a matte or glossy finish.
Brecciated Jasper: Brecciated Jasper is typically a mix of colors and patterns, often formed from broken fragments of other minerals or rocks. It may have a rough texture, and may have a matte or glossy finish.

These physical and visual differences make each type of Jasper unique and valuable for a variety of decorative and ornamental purposes, as well as for their believed healing and grounding properties.

Uses of Jasper

Jasper has been used for a wide range of purposes throughout history, from decorative and ornamental to healing and spiritual. Here are some of the most common uses of Jasper:
Decorative purposes: Jasper is often used for decorative purposes due to its unique colors and patterns. It is commonly used in jewelry, sculpture, and other artistic pieces.
Healing and spiritual purposes: Jasper has long been believed to have healing and grounding properties. It is often used in crystal healing, meditation, and other spiritual practices to promote balance and well-being.
Building and construction: Jasper has been used as a building material for centuries due to its durability and resistance to weathering. It has been used in everything from walls and floors to decorative elements such as columns and sculptures.
Jewelry: Jasper is a popular gemstone for jewelry due to its unique colors and patterns. It is commonly used in necklaces, bracelets, earrings, and other jewelry pieces.
Industrial uses: Jasper is sometimes used in industrial applications such as abrasives, as it is a hard and durable material.
Ornamental purposes: Jasper is often used for ornamental purposes, such as decorative bowls, vases, and other home decor items.

Overall, Jasper is a versatile and valuable mineral with a wide range of uses and applications, from decorative and ornamental to healing and spiritual.

Mining and Production

Jasper is a common mineral that is found all over the world, and as such, mining and production methods may vary depending on the location and type of Jasper being extracted.
However, here are some general steps involved in mining and production of Jasper: Exploration: Mining companies will first explore the area to determine if there is a suitable deposit of Jasper to mine. This may involve mapping the area and taking samples to determine the quality and quantity of Jasper available. Mining: Once a suitable deposit of Jasper has been identified, the mining process begins. This may involve open-pit mining, underground mining, or a combination of both depending on the location and type of Jasper being mined. Mining equipment such as bulldozers, excavators, and dump trucks are used to extract the Jasper from the ground. Crushing and Grinding: The extracted Jasper is then crushed and ground into smaller pieces to make it easier to transport and process further. Sorting and Classification: After crushing and grinding, the Jasper is sorted and classified according to its quality and grade. This may involve separating out impurities and categorizing the Jasper by color and pattern. Finishing: Once the Jasper has been sorted and classified, it may undergo further finishing processes such as polishing, cutting, or shaping to produce finished products such as jewelry or decorative objects. Distribution: The finished Jasper products are then distributed to wholesalers and retailers for sale to customers. Overall, the mining and production of Jasper involves several steps and processes, from exploration and mining to sorting and finishing, before the final products are ready for distribution and sale. Locations where Jasper can be found Jasper is a mineral that can be found all over the world. It is a common mineral and is found in a variety of geological settings. Here are some of the most notable locations where Jasper can be found: Australia: Australia is a major producer of Jasper, with deposits found in several locations including Western Australia, Queensland, and the Northern Territory. 
Brazil: Brazil is another major producer of Jasper, with deposits found in several regions including Minas Gerais and Rio Grande do Sul. India: Jasper is found in several locations in India, including Rajasthan, Maharashtra, and Madhya Pradesh. Madagascar: Jasper is found in Madagascar, with some of the most notable deposits located in the Bongolava region. South Africa: Jasper is found in several locations in South Africa, including the Northern Cape and Mpumalanga. United States: Jasper is found in several states in the United States, including Oregon, Idaho, California, and Arizona. Russia: Jasper is found in several regions in Russia, including the Urals and Siberia. Mexico: Jasper is found in several regions in Mexico, including Sonora and Chihuahua. Overall, Jasper can be found in many locations around the world, and the quality and characteristics of the mineral can vary depending on the location and geological setting where it is found. Summary of key points about Jasper Jasper is an opaque variety of chalcedony that is typically red, brown, yellow, or green in color. It is often striped, spotted, or veined and may contain other minerals such as iron, quartz, or calcite. Jasper is found all over the world, with major deposits in Australia, Brazil, India, Madagascar, South Africa, the United States, Russia, and Mexico. Jasper has been used for a wide range of purposes throughout history, including decorative, ornamental, healing, spiritual, building and construction, jewelry, and industrial uses. The physical and visual differences between the types of Jasper are mainly determined by the mineral impurities present in the stone, which affect its color and pattern. Jasper is a hard and durable material that is resistant to weathering, making it a popular building material for walls, floors, and decorative elements such as columns and sculptures. 
Jasper is commonly used in crystal healing, meditation, and other spiritual practices to promote balance and well-being. The mining and production of Jasper involves several steps and processes, from exploration and mining to sorting and finishing, before the final products are ready for distribution and sale. FAQ Q: What is Jasper used for? A: Jasper has been used for a wide range of purposes throughout history, including decorative, ornamental, healing, spiritual, building and construction, jewelry, and industrial uses. Q: What colors does Jasper come in? A: Jasper can come in a variety of colors, including red, brown, yellow, green, and other earth tones. The colors and patterns of Jasper are determined by the mineral impurities present in the stone. Q: Where can Jasper be found? A: Jasper is found all over the world, with major deposits in Australia, Brazil, India, Madagascar, South Africa, the United States, Russia, and Mexico. Q: Is Jasper a valuable gemstone? A: Jasper is not considered a precious gemstone like diamonds or rubies, but it is still highly valued for its unique colors and patterns. The value of Jasper depends on its quality, rarity, and size. Q: How do you care for Jasper jewelry? A: To care for Jasper jewelry, it is recommended to clean it with a soft, damp cloth and mild soap. Avoid exposing Jasper to harsh chemicals, extreme temperatures, and direct sunlight, as this can cause discoloration or damage. Q: Can Jasper be used for building and construction? A: Yes, Jasper is a hard and durable material that is resistant to weathering, making it a popular building material for walls, floors, and decorative elements such as columns and sculptures. Q: Is Jasper used in crystal healing? A: Yes, Jasper is commonly used in crystal healing, meditation, and other spiritual practices to promote balance and well-being. Different colors and patterns of Jasper are believed to have different healing properties. Q: Is Jasper a rare mineral? 
A: Jasper is a common mineral and is found in many locations around the world. However, certain varieties of Jasper, such as Imperial Jasper or Ocean Jasper, can be rare and highly valued. Q: What is the difference between Jasper and Agate? A: Jasper and Agate are both varieties of chalcedony, and the main difference between the two is their pattern and color. Jasper typically has a more opaque and solid color, while Agate has a translucent or banded appearance. Q: How is Jasper formed? A: Jasper is formed from silica-rich sedimentary rocks that have been subjected to intense pressure and heat over time. The mineral impurities present in the stone determine the color and pattern of the Jasper. Q: Can Jasper be polished? A: Yes, Jasper can be polished to a high shine and is often used for decorative purposes, such as vases, sculptures, and other ornamental objects. Q: Can Jasper be dyed? A: Yes, Jasper can be dyed to enhance its color or create a more uniform appearance. However, some people prefer to use natural, undyed Jasper for its unique and varied colors and patterns. Q: Is Jasper a birthstone? A: Jasper is not an official birthstone, but it is sometimes used as an alternative or secondary birthstone for the month of October. Q: Is Jasper a fossil? A: No, Jasper is not a fossil, but it is often found in sedimentary rocks that contain fossils. Some types of Jasper, such as Picture Jasper, can contain images or patterns that resemble fossils or other natural scenes.
The colors and patterns of Jasper are determined by the mineral impurities present in the stone. Q: Where can Jasper be found? A: Jasper is found all over the world, with major deposits in Australia, Brazil, India, Madagascar, South Africa, the United States, Russia, and Mexico. Q: Is Jasper a valuable gemstone? A: Jasper is not considered a precious gemstone like diamonds or rubies, but it is still highly valued for its unique colors and patterns. The value of Jasper depends on its quality, rarity, and size. Q: How do you care for Jasper jewelry? A: To care for Jasper jewelry, it is recommended to clean it with a soft, damp cloth and mild soap. Avoid exposing Jasper to harsh chemicals, extreme temperatures, and direct sunlight, as this can cause discoloration or damage. Q: Can Jasper be used for building and construction? A: Yes, Jasper is a hard and durable material that is resistant to weathering, making it a popular building material for walls, floors, and decorative elements such as columns and sculptures. Q: Is Jasper used in crystal healing? A: Yes, Jasper is commonly used in crystal healing, meditation, and other spiritual practices to promote balance and well-being. Different colors and patterns of Jasper are believed to have different healing properties. Q: Is Jasper a rare mineral? A: Jasper is a common mineral and is found in many locations around the world. However, certain varieties of Jasper, such as Imperial Jasper or Ocean Jasper, can be rare and highly valued. Q: What is the difference between Jasper and Agate? A: Jasper and Agate are both varieties of chalcedony, and the main difference between the two is their pattern and color. Jasper typically has a more opaque and solid color, while Agate has a translucent or banded appearance. Q: How is Jasper formed?
no
Petrology
Can Jasper be classified as a precious gemstone?
yes_statement
"jasper" is "classified" as a "precious" "gemstone".. "jasper" can be categorized as a "precious" "gemstone".
https://www.tucsonbeads.com/blogs/news/a-brief-overview-of-semi-precious-gemstones
A brief overview of Semi Precious Gemstones! | Tucson Beads
A brief overview of Semi Precious Gemstones! Semi precious stones, also known as gemstones, are naturally occurring minerals that are valued for their beauty and rarity. These stones are different from precious stones, such as diamonds, emeralds, rubies, and sapphires, which are more valuable due to their rarity and exceptional qualities. Semi precious stones are commonly used in jewelry making, and their value depends on factors such as color, clarity, and rarity. It is better to buy semi precious stones wholesale, since the per-stone price of a bulk purchase is quite low compared to buying them individually. Semi precious gemstones, also known as colored gemstones, are minerals that are valued for their beauty and rarity, but are not as rare or valuable as precious gemstones such as diamonds, emeralds, rubies, and sapphires. Some examples of semi precious gemstones include: Amethyst - a purple variety of quartz Aquamarine - a blue-green variety of beryl Citrine - a yellow variety of quartz Garnet - a group of minerals that come in various colors Peridot - a green variety of olivine Topaz - a variety of silicate mineral that can come in various colors Turquoise - a blue-green mineral that is often used in jewelry Opal - a gemstone with a unique play of colors Tourmaline - a gemstone that can come in various colors, including green, pink, and blue Jasper - a gemstone with a variety of colors and patterns. Semi precious gemstones are commonly used in jewelry making, and their value depends on factors such as color, clarity, and rarity. Some semi precious gemstones, such as certain types of jade and lapis lazuli, can be quite valuable and highly prized. Semi precious beads refer to beads made from minerals that are not classified as precious stones such as diamonds, emeralds, rubies, and sapphires. Instead, semi-precious stones include minerals like amethyst, citrine, turquoise, agate, jasper, garnet, and many others. 
Semi precious beads are popular materials for jewelry making and crafting because they offer a wide range of colors, patterns, and textures that can enhance any design. They can be used to make necklaces, bracelets, earrings, and other jewelry pieces. When purchasing semi precious beads, it is important to consider factors such as the quality, size, shape, and color of the beads. You may also want to consider the origin of the stones and whether they have been treated or enhanced in any way. It is always a good idea to buy from a reputable source to ensure that you are getting high-quality semi precious beads for your project. Semi precious stone beads are small, polished pieces of natural minerals that are used in jewelry making. These beads come in a variety of shapes, sizes, and colors, and can be strung together to create bracelets, necklaces, earrings, and other types of jewelry. Semi precious stone beads are popular in jewelry making because of their affordability, versatility, and natural beauty. They can be combined with other materials such as glass beads, metal findings, and leather cords to create unique and personalized pieces of jewelry. Semi precious gemstone beads are beads made from semi-precious gemstones, which are minerals that are not classified as precious stones. These gemstones include stones such as amethyst, citrine, garnet, turquoise, jasper, agate, and many others. Gemstone semi precious beads can add beauty and uniqueness to any jewelry design, and their affordability compared to precious gemstones makes them a popular choice for jewelry makers and enthusiasts.
They can be used to make necklaces, bracelets, earrings, and other jewelry pieces. When purchasing semi precious beads, it is important to consider factors such as the quality, size, shape, and color of the beads. You may also want to consider the origin of the stones and whether they have been treated or enhanced in any way. It is always a good idea to buy from a reputable source to ensure that you are getting high-quality semi precious beads for your project. Semi precious stone beads are small, polished pieces of natural minerals that are used in jewelry making. These beads come in a variety of shapes, sizes, and colors, and can be strung together to create bracelets, necklaces, earrings, and other types of jewelry. Semi precious stone beads are popular in jewelry making because of their affordability, versatility, and natural beauty. They can be combined with other materials such as glass beads, metal findings, and leather cords to create unique and personalized pieces of jewelry. Semi precious gemstone beads are beads made from semi-precious gemstones, which are minerals that are not classified as precious stones. These gemstones include stones such as amethyst, citrine, garnet, turquoise, jasper, agate, and many others. Gemstone semi precious beads can add beauty and uniqueness to any jewelry design, and their affordability compared to precious gemstones makes them a popular choice for jewelry makers and enthusiasts.
no
Petrology
Can Jasper be classified as a precious gemstone?
yes_statement
"jasper" is "classified" as a "precious" "gemstone".. "jasper" can be categorized as a "precious" "gemstone".
https://irisgems.com/pages/semi-precious-gemstones
What are semi-precious gemstones? – Iris Gems
What are semi-precious gemstones? Introduction The ornamental use of jewellery and gemstones is profoundly rooted in human culture. They took on intimate, public, and theological meanings for men, growing in social significance as human societies progressed. We seem to be especially drawn to gemstones. They have religious, healing, prestige, political, and wealth connotations and have become icons of both ancient and modern societies. It is not surprising that the term precious was applied to the rarest and most sought after gems in antiquity. The words precious and semi-precious became popular in the nineteenth century. "Precious" denoted the most valuable gemstones, while "semi-precious" denoted the remaining exemplars important to jewellery. It all comes down to rarity and craftsmanship to qualify as a valuable semi-precious stone. When amethysts were scarce, they were considered precious; however, once large deposits of amethysts were discovered in many parts of the world, this gemstone lost its status as a precious stone. Diamonds, sapphires, rubies, and emeralds are precious stones in the Western world. The other stones are classified as semi-precious. The distinction between “precious stone” and “semi-precious stone” is a commercial one that isn't always applicable. They are words that were coined as a selling tactic by those trying to sell precious stones. There are numerous examples of semi-precious stones that command a premium. What are semi-precious gemstones? A semi-precious gemstone is any gemstone that is not a diamond, ruby, emerald, or sapphire. The term "semi-precious" gemstone does not imply that it is less valuable than precious gemstones. Semi-precious gemstones are simply more plentiful (but there are a few exceptions). Semi-precious gemstones are valued primarily based on their colour, availability, and consistency. The terms "semiprecious stones" and "semi-precious stones" refer to all gemstones that are not classified as "precious." 
Any gemstone that can be used for personal adornment will be included. Semiprecious stones include gemstones crafted from agate, amber, amethyst, aquamarine, aventurine, chalcedony, chrysocolla, chrysoprase, citrine, garnet, hematite, jade, jasper, jet, kunzite, lapis lazuli, malachite, moonstone, obsidian, onyx, peridot. Some people claim that the term "semiprecious" is insulting, provocative, deceptive, or ambiguous and should be phased out. They believe that "precious stones" and "other stones" should be separated. But it will be tremendously difficult to get rid of the term "semiprecious." Hundreds of famous books with the term "semiprecious" in their titles have been published over the last 150 years. Today, the word can be found in thousands of books, journals, websites, and other documents written by gem and jewellery firms, government agencies, and leading gemological institutions. Characteristics of semi-precious gems Before purchasing a gemstone from a local store or online, it is important to recognise its properties and benefits. The characteristics of gemstones are discussed further below. The colour of a gemstone is what distinguishes and adds charm to it. There are colourless gemstones as well, but, with the exception of diamond, they are less common. Most gemstones are translucent, but others, such as opal, coral, and lapis lazuli, are opaque. Clarity is another essential attribute of a gemstone. Many gemstones have inclusions because they are formed under various conditions and may have mineral deposits or fissures. This, however, does not diminish the elegance or importance of a gemstone. It is rare for a gemstone to have few or no inclusions, and such stones will be costly. Oval, round, or cushion-shaped gemstones are all possible. The cut of the gemstone improves its overall elegance and value. A great gemstone cut is one that precisely brings out the gem's colour, minimises inclusions, and makes the gemstone symmetrical and proportionate. 
The volume and quality of light reflected from a gemstone's surface determines how lustrous it is. The lustre's shine enhances the elegance of the gemstones, making it an important consideration when selecting gems for jewellery design. Its size alone does not determine the carat weight of a gemstone. Since carat is a unit of mass, a gemstone's carat weight depends on its density as well as its size, so two gemstones of the same size can have different carat weights. The density of the gemstone differs from one stone to the next. Does the term ‘semi-precious stones’ mean less valuable and less preferable? The classification of gemstones as "precious" and "semiprecious" can lead some people to believe that "precious stones" are more valuable, rare, beautiful, or desirable than "semiprecious stones." Here are just three of the problems with referring to specific stones as "precious" because of their importance, rarity, appearance, or desirability. Some semi-precious gems have much higher per-carat prices than many individual "precious stones" sold in the US market; their values exceed those of diamonds of comparable carat weight. These examples demonstrate unequivocally that semiprecious stones can be extremely valuable. The Aurora Australis Opal sold for $1 million in 2004, a price of more than $5,500 per carat. High-quality 8x10 millimetre jade cabochons weighing around 2.5 carats can fetch up to $25,000. Red beryl gems have sold for more than $10,000 per carat. Many semiprecious stones are rarer than precious stones. Red beryl, ammolite, benitoite, gem silica, demantoid garnet, tsavorite garnet, tanzanite, ametrine, and various other gems are found in fewer places and produced in smaller quantities than "precious" stones. They are extremely rare in comparison, but that alone does not qualify them as "precious." Beauty and desirability are also subjective properties dependent on the observer's perception. 
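The carat-weight remark above (size alone does not fix carat weight) is simple arithmetic: one carat is defined as 0.2 grams, so weight in carats is volume times density divided by 0.2. A minimal Python sketch; the densities are typical published values for opal and garnet, and the 1 cm³ volume is an arbitrary illustration, not a figure from the article:

```python
# 1 carat is defined as 0.2 grams, so carat weight follows from
# a stone's volume and its density (mass = volume * density).
CARAT_IN_GRAMS = 0.2

def carats(volume_cm3: float, density_g_per_cm3: float) -> float:
    """Carat weight of a stone from its volume and density."""
    return volume_cm3 * density_g_per_cm3 / CARAT_IN_GRAMS

# Two stones of identical size (1 cm^3 each) but different density:
opal = carats(1.0, 2.1)    # opal density is roughly 2.1 g/cm^3
garnet = carats(1.0, 3.9)  # garnet density is roughly 3.9 g/cm^3
print(round(opal, 1), round(garnet, 1))  # 10.5 19.5
```

Same size, nearly double the carat weight, which is exactly the article's point: density, not size, sets the figure.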
It would be fascinating to show excellent samples of diamond, ruby, emerald, sapphire, and opal to a random spectrum of people and ask them the most attractive or valuable in their opinion. It is likely that opal, traditionally called a "semiprecious stone," would win or place higher in the competition than the "precious stones." Why are semi-precious gemstones so popular in jewellery making? Semi-precious stone jewellery is very common among both sexes. Millions of buyers and sellers all over the world are interested in these jewels. Exquisite gemstones are used to create various jewellery, including delicate and priceless necklaces, earrings, bangles, bracelets, and anklets. The rarity of the stone determines the price of semi-precious stone jewellery. Though it is not as pricey as the original stone jewellery, it has its distinct value today. Most of these stones are more complex than precious stones and therefore do not crack easily. These stones' clarity and colour are equally significant. The price of semi-precious stone jewellery is often determined by how light passes through it. The way light travels through gemstones determines the clarity of the stone. Gemstone earrings and pendants are among the most exquisite and elegant types of jewellery available in many top jewellery stores. These stones are also used in the making of custom jewellery and wedding jewellery. Women wear semi-precious stone jewellery to most formal occasions, such as weddings, social gatherings, and parties. Since these jewels complement various outfits, they can be worn to enhance the look of both an ensemble and the wearer. Aquamarine and opals, for example, stand out due to their unrivalled elegance and glow. Opal jewellery is one of the most appealing and unique types of jewellery. Some common semi-precious gemstones Some of the semi-precious stones include- Pearls are hard artefacts formed in the soft tissue of a shelled mollusk or conulariid. 
Pearls have a Mohs hardness rating of 2.5 to 4.5. Black onyx is chalcedony, a type of microcrystalline quartz, and is considered the anniversary gemstone for the tenth year of marriage. On the Mohs scale, black onyx has a hardness of 6.5 to 7. Opal is a type of silica that looks like glass but is chemically similar to quartz. A variable amount of water can be contained within this mineral. Opal's hardness varies from 5.5 to 6.5 on the Mohs scale. Blue topaz is a lovely stone that comes in a wide variety of bright blue hues. It is a silicate mineral that contains aluminium and fluorine. Moonstone is a lustrous mineral composed of sodium potassium aluminium silicate. The sheen effect is caused by light diffraction from inside the microstructure, made up of feldspar layers. Conclusion An individual considering buying jewellery should not be swayed by the terms "precious" or "semiprecious." Instead, they should consider the gemstone appeals to them, is appropriate for its intended use, and has a price that they are ready to pay. The words "precious" and "semiprecious" are old and arbitrary descriptors that have never really meant anything.
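The Mohs hardness figures quoted above can be gathered into a small table and compared programmatically. This is a sketch only, using exactly the ranges given in the text (pearl 2.5–4.5, opal 5.5–6.5, black onyx 6.5–7); ranking by range midpoint is my own assumption, not a gemological convention:

```python
# Mohs hardness ranges (min, max) as quoted in the article.
MOHS = {
    "pearl": (2.5, 4.5),
    "opal": (5.5, 6.5),
    "black onyx": (6.5, 7.0),
}

def rank_by_hardness(table: dict) -> list:
    """Sort stone names from softest to hardest by range midpoint."""
    return sorted(table, key=lambda name: sum(table[name]) / 2)

print(rank_by_hardness(MOHS))  # ['pearl', 'opal', 'black onyx']
```

The ordering matches the prose: pearls are by far the softest of the three, which is why pearl jewellery needs the gentlest handling.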
The other stones are classified as semi-precious. The distinction between “precious stone” and “semi-precious stone” is a commercial one that isn't always applicable. They are words that were coined as a selling tactic by those trying to sell precious stones. There are numerous examples of semi-precious stones that command a premium. What are semi-precious gemstones? A semi-precious gemstone is any gemstone that is not a diamond, ruby, emerald, or sapphire. The term "semi-precious" gemstone does not imply that it is less valuable than precious gemstones. Semi-precious gemstones are simply more plentiful (but there are a few exceptions). Semi-precious gemstones are valued primarily based on their colour, availability, and consistency. The terms "semiprecious stones" and "semi-precious stones" refer to all gemstones that are not classified as "precious." Any gemstone that can be used for personal adornment will be included. Semiprecious stones include gemstones crafted from agate, amber, amethyst, aquamarine, aventurine, chalcedony, chrysocolla, chrysoprase, citrine, garnet, hematite, jade, jasper, jet, kunzite, lapis lazuli, malachite, moonstone, obsidian, onyx, peridot. Some people claim that the term "semiprecious" is insulting, provocative, deceptive, or ambiguous and should be phased out. They believe that "precious stones" and "other stones" should be separated. But it will be tremendously difficult to get rid of the term "semiprecious." Hundreds of famous books with the term "semiprecious" in their titles have been published over the last 150 years. Today, the word can be found in thousands of books, journals, websites, and other documents written by gem and jewellery firms, government agencies, and essential gemological institutions.
no
Petrology
Can Jasper be classified as a precious gemstone?
no_statement
"jasper" is not "classified" as a "precious" "gemstone".. "jasper" cannot be considered a "precious" "gemstone".
https://trulyexperiences.com/blog/purple-gemstones/
Guide to Purple Gemstones - List of Names, Meanings & Pictures
Guide to Purple Gemstones – List of Names, Meanings & Pictures Purple gemstones have long been adored, and for good reason. There’s something about purple that intrigues people, and the colour is often associated with royalty, wealth, passion, power, luxury, ambition and magic! While naturally occurring purple gemstones are very rare, there are a few interesting and beautiful stones that have been found in this enigmatic hue. Historically, purple gemstones have been renowned for having deeply spiritual powers that help the stone’s owners heal from various ailments. Purple gemstones are also thought to have an effect on mindset and are believed to inspire clarity in one’s thoughts. Of course, as with any gemstone, the hues of purple vary from stone to stone and no two gems are alike. While gorgeous variants of purple like lavender, lilac and mauve are more commonly found, it’s the deepest colours that are the rarest and, as a result, the most expensive variants one can find. If you’re passionate about purple, take a look at the dazzling options you have to choose from. List of Purple Gemstones Purple Diamonds No list of impressive gemstones would be complete without the mention of the most popular stone of all – the diamond. Diamonds can be found in a wide range of colours, and of course, purple is one of them. For a purple diamond to form, extremely high amounts of hydrogen must be present during its formation. Because of this, purple diamonds rank amongst the world’s rarest coloured diamonds. A purple diamond of a single carat can typically fetch tens of thousands of dollars. The higher the quality of the diamond, the deeper the shade of purple. While finding purple diamonds may not be a common occurrence, some of the names given to various shades of the purple stone include lilac, grape, orchid and lavender. Purple diamonds are most commonly found in Russia and Australia, although they have recently been discovered in Canada too. 
Most famous purple diamonds: Purple Orchid: One of the biggest purple diamonds ever discovered came from an undisclosed South African mine. This diamond was called the Purple Orchid and was first unveiled in 2014 at the Hong Kong Jewellery and Gem Fair. The stone was four carats when rough, but when polished, was revealed to be a magnificent 3.37 carat beauty. Royal Purple Heart: The Royal Purple Heart has been described as “a true work of art.” However, it’s shrouded in mystery. Weighing in at a massive 7.34 carats, it’s a fancy vivid purple heart-cut stone. The Julius Klein Diamond Corporation is responsible for cutting the stone and accentuating the brilliance of its colour. While it’s thought to be the largest purple stone ever discovered, not much else is known about the Royal Purple Heart. It’s unclear where it was found, as well as who currently owns it and how much they paid for it. Supreme Purple Star: First appearing in London in 2002 was the Supreme Purple Star, which is often classified as “The King Of All Purple Diamonds”. If looked at from different angles, it can appear either purple or crimson. This is the first known diamond to have two colours appearing in such a unique way. Amethyst Perhaps the most easily recognisable purple gemstone is the amethyst. During ancient times, this stone was thought to be just as precious as rubies, emeralds and diamonds. Amethyst stones were held in high esteem until large deposits were found in Brazil, which is when they became more common and accessible. Amethyst can be found in all shades of purple, and those with the darkest hues are regarded as the highest quality. These gemstones are often used in jewellery as the colour blends well with both neutral and colourful tones. Interestingly, if amethyst is exposed to direct sunlight for too long, its colour can fade. Amethysts are durable enough but require a great deal of effort in order to ensure they maintain their colour and lustre. 
However, if they are well maintained, these precious stones can last a lifetime. Amethyst meaning & properties The name amethyst is derived from the Greek word ametusthos, meaning “not intoxicated”. Legend has it that the purple gemstone has the power to protect its owner from drunkenness and overindulgence. Amethyst is also the stone of St. Valentine, who wore a purple amethyst ring with an engraving of Cupid on it. A symbol of faithful love, today the purple gem marks Valentine’s Day and is the birthstone for February. Special occasions: Iolite Iolites are highly sought-after purple gemstones. They are truly beautiful and rival the beauty of more expensive, highly regarded stones such as sapphires or tanzanite. Iolites are brilliant stones that occur in lovely blue-purple shades. However, due to their abundance, they are not highly valued. Iolite is also susceptible to chips and cracks if struck with force or dropped. Regardless, it’s quite a hard stone and works well in almost every kind of jewellery. When mounted in rings, iolite needs to be set protectively in a bezel or halo mould. Iolite is wonderfully sparkly and has an eye-catching brilliance about it. This is why it is well served in jewellery that easily catches the light, such as rings and earrings. Iolite is known as Cordierite, a name fashioned after the French mineralogist Pierre Antoine Cordier. Another name for iolite is Dichroite, which is the Greek word for ‘two colours.’ From certain angles this stone looks as though it has more than one shade of purple in it. Iolite meaning & properties Iolite comes from the Greek word ‘ios’, which means violet. Due to its shifting shades of violet-blue, iolite is associated with travel, exploration, and illumination. Iolite is also referred to as the ‘Viking’s Compass’. Legend has it that Viking explorers used iolite to help navigate their ships in the open seas. 
Using a thin piece of iolite as the world’s first polarizing filter, they would determine the direction of the sun to help guide them on their way. Purple Sapphire Most of us associate sapphire with blue gemstones, but purple sapphires are a lot rarer and more valuable. Purple sapphires form when trace elements such as chromium are present during crystal growth. It’s not uncommon for people to confuse sapphires with amethysts, but the former is a lot harder and more durable. In fact, purple sapphires are second only to diamonds when it comes to durability. They are extremely resistant to breakage and chipping. Most other sapphires on the market have been heat-treated to enhance their colour and clarity, but purple sapphires are generally not treated due to their fantastic natural colouring. Thanks to their remarkable natural durability, purple sapphires can be used in all kinds of jewellery, from rings that are easily bumped around to earrings that rarely move. Sapphire meaning & properties Purple sapphire is known as the “stone of awakening”, said to increase spiritual power and consciousness. The purple gemstone symbolises wisdom and astuteness, bringing its wearer good fortune and spiritual insight. In the Middle Ages, the gem was believed to protect those close to you from harm. Sapphire is a protection stone and also symbolises loyalty and trust. Purple Jasper Stone Mainly known for being a blue gemstone, jasper is found in various shades of purple too. What makes jasper stand out from other gemstones is its unique matrix of patterns and veins. Purple jasper is a member of the chalcedony family of gemstones and is rarely faceted. Purple jasper is a result of blue and red jasper blending together, which is why it’s such a warm shade. This precious stone is commonly used in statement pieces and fine jewellery as its colour is so impactful.
It’s widely known to be used in jewellery owned by some of the world’s elite royals. If well cared for, purple jasper can last for several decades without losing its brilliance or colour. Purple jasper meaning & properties In terms of gemology, purple jasper represents royalty, honour, dignity, status and power, given the stone’s frequent use by royal jewellers. Purple jasper is also known as ‘the stone of bonding.’ It is thought to be able to take in multiple energies at once and unite them. Crystal and spiritual healers believe that purple jasper reduces contradictions and makes way for spirituality, allowing the stone’s holder to bond better with their loved ones. Purple Tourmaline Tourmaline is a fairly popular gemstone, but purple tourmaline is not easily found. While many people aren’t even aware of the existence of purple tourmaline, these stones are renowned by gemologists for their brilliance. They have the ability to absorb light in different ways from different angles, a phenomenon known as pleochroism. Jewellers tend to facet purple tourmaline as a means of enhancing its pleochroism. Because the stone is not particularly hard, purple tourmaline often undergoes heat treatment to strengthen it. The more heavily treated it is, the less valuable it becomes. Purple tourmaline meaning & properties Purple tourmaline also goes by the name of siberite. It’s believed to promote serene energy, ground its wearer, and encourage relaxation. Purple tourmaline is also said to release emotional attachments that no longer serve the wearer and to help manage headaches and migraines. The Power of Purple Gems While purple gemstones aren’t as common as stones of other colours, they remain popular precisely because of their rarity, and very few of them need to undergo any treatments to enhance their natural hues. The depth and layering of their colours and facets make for beautiful trinkets that stand out in a way that most other stones don’t.
Purple also goes very well with all colours of metal, so these gems can be used in almost every kind of jewellery setting. While white metals like silver, white gold and platinum give purple stones a more contemporary look, rose and yellow gold settings make the jewel look more vintage. Either way, they are sure to dazzle! If you’re interested in other coloured gemstones, have a look at the names, meanings and properties of different coloured gems.
In the Middle Ages, the gem was believed to protect those close to you from harm. Sapphire is a protection stone and also symbolises loyalty and trust. Purple Jasper Stone Mainly known for being a blue gemstone, jasper is found in various shades of purple too. What makes jasper stand out from other gemstones is its unique matrix of patterns and veins. Purple jasper is a member of the chalcedony family of gemstones and is rarely faceted. Purple jasper is a result of blue and red jasper blending together, which is why it’s such a warm shade. This precious stone is commonly used in statement pieces and fine jewellery as its colour is so impactful. It’s widely known to be used in jewellery owned by some of the world’s elite royals. If well cared for, purple jasper can last for several decades without losing its brilliance or colour. Purple jasper meaning & properties In terms of gemology, purple jasper represents royalty, honour, dignity, status and power, given the stone’s frequent use by royal jewellers. Purple jasper is also known as ‘the stone of bonding.’ It is thought to be able to take in multiple energies at once and unite them. Crystal and spiritual healers believe that purple jasper reduces contradictions and makes way for spirituality, allowing the stone’s holder to bond better with their loved ones. Purple Tourmaline Tourmaline is a fairly popular gemstone, but purple tourmaline is not easily found. While many people aren’t even aware of the existence of purple tourmaline, these stones are renowned by gemologists for their brilliance. They have the ability to absorb light in different ways from different angles, a phenomenon known as pleochroism. Jewellers tend to facet purple tourmaline as a means of enhancing its pleochroism.
yes
Real Estate
Can One Make Quick Profit Flipping Houses?
yes_statement
one can make "quick" "profit" "flipping" "houses".. it is possible to make "quick" "profit" by "flipping" "houses".
https://www.investopedia.com/articles/mortgages-real-estate/08/house-flip.asp
Flipping Houses: How It Works, Where to Start, and 5 Mistakes to ...
Flipping Houses: How It Works, Where to Start, and 5 Mistakes to Avoid James McWhinney is a long-tenured Investopedia contributor and an expert on personal finance and investing. With over 25 years of experience as a full-time communications professional, James writes about finance, food, and travel for a variety of publications and websites. He received his double major Bachelor of Arts in professional and creative writing from Carnegie Mellon University and his Master of Journalism at Temple University. Chip Stapleton is a Series 7 and Series 66 license holder, CFA Level 1 exam holder, and currently holds a Life, Accident, and Health License in Indiana. He has eight years of experience in finance, from financial planning and wealth management to corporate finance and FP&A. Vikki Velasquez is a researcher and writer who has managed, coordinated, and directed various community and nonprofit organizations. She has conducted in-depth research on social and economic issues and has also revised and edited educational materials for the Greater Richmond area. The road to real estate riches isn’t all about curb appeal and sold signs. Far too many would-be real estate moguls overlook the basics and end up failing, and this includes flippers: individuals who purchase and renovate properties before putting them back on the market to make a profit. If you're going to flip a home, make sure you have the cash, time, skills, knowledge, and patience before you start, or you risk losing out. But how do you avoid these mistakes? Key Takeaways Flipping is a real estate strategy that involves buying homes, renovating them, and selling them for a profit in a short period of time. Flipping houses is a business that requires knowledge, planning, and savvy to be successful. Common mistakes made by novice real estate investors are underestimating the time or money that the project will require. Another error that house flippers make is overestimating their skills and knowledge.
Patience and good judgment are especially important in a timing-based business like real estate investing. Top 5 Must-Haves For Flipping Houses How Flipping Houses Works Flipping is a real estate investment strategy where an investor purchases a property with the intention of selling it for a profit rather than using it. Investors who flip properties concentrate on the purchase and subsequent resale of one or a group of properties. Many investors attempt to generate a steady flow of income by engaging in frequent flips. So how do you flip a building or house? The key is to buy low and sell high. But rather than adopt a buy-and-hold strategy, it's important to complete the transaction as quickly as possible. This limits the time that your capital is at risk. In general, the focus should be on speed as opposed to maximum profit. That’s because each day costs you more money in mortgage, utilities, property taxes, insurance, and other costs associated with homeownership. But the flipping plan often comes with several pitfalls. Any profit you make is typically derived from price appreciation that results from a hot real estate market in which prices are rising rapidly or from capital improvements made to the property—or both. For example, an investor might purchase a fixer-upper in a hot neighborhood, make substantial renovations, then offer it at a price that reflects its new appearance and amenities. Where to Start Limit your financial risk and maximize your return potential. This means you shouldn't pay too much for a home. And make sure you also know how much the necessary repairs or upgrades will cost before you buy. You can then figure out an ideal purchase price once you have this information. There is a rule called the 70% rule. It states that an investor should pay no more than 70% of the after-repair value of a property less any repairs that are needed. The ARV is what a home is worth after it is fully repaired. 
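As a quick sanity check, the 70% rule can be sketched in a few lines of Python. This is an illustrative helper of our own, not something from the article:

```python
def max_offer(arv: float, repair_cost: float) -> float:
    """Maximum purchase price under the 70% rule: pay no more than
    70% of the after-repair value (ARV), minus estimated repairs.
    Rounded to cents to avoid floating-point noise."""
    return round(arv * 0.70 - repair_cost, 2)

# A home worth $150,000 after repairs that needs $25,000 of work:
print(max_offer(150_000, 25_000))  # 80000.0
```

Running the numbers this way before bidding keeps the repair estimate explicit instead of letting it get buried in the asking price.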
Here's how it works: If a home’s ARV is $150,000 and it needs $25,000 in repairs, then the 70% rule means that an investor should pay no more than $80,000 for the home: $150,000 × 0.70 = $105,000, and $105,000 − $25,000 = $80,000. Like any other small business, flipping requires time and money, planning and patience, skill, and effort. It will likely wind up being harder and more expensive than you ever imagined. Take it lightly at your peril: If you’re just looking to get rich quickly by flipping a home, you could end up in the poorhouse. Below are the five mistakes to avoid if you are thinking about flipping a house. Even if you get every detail right, changing market conditions could mean that every assumption you made at the beginning will be invalid by the end. 1. Not Enough Money Dabbling in real estate is expensive. The first expense is the property acquisition cost. While low/no-money-down financing claims abound, finding these deals from a legitimate vendor is easier said than done. And if you’re financing the acquisition, you’re going to pay interest. Consider this: The interest on borrowed money is tax deductible even after the passage of the Tax Cuts and Jobs Act (TCJA), but it is not a 100% deduction. Every dollar spent on interest adds to the amount you’ll need to earn on the sale just to break even. Research your financing options to determine the best product for your needs and to find the right lender. Consider using a mortgage calculator to compare rates that various lenders offer. Paying cash certainly eliminates the cost of interest, but even then, there are holding costs and opportunity costs for tying up your cash. Even if you manage to overcome the financial hurdles of flipping a house, don’t forget about capital gains taxes, which will chip away at your profit. Making a profit is tougher than it used to be, and margins are shrinking. Flippers grossed about $67,900 per property across the country in 2022, a return on investment (ROI) of 26.9%.
That's a 3% decrease from 2021, when flippers earned about $70,000 per property. This doesn't mean you can't make money; it just takes more care. Renovation and other costs (real estate taxes, utilities, and other carrying costs) can cut your profit by around two-thirds. Add to that an unexpected structural problem with the property, and a gross profit can become a net loss. So if you plan to fix and sell a house for a profit, the sale price must exceed the cost of acquisition, renovation costs, and holding costs combined. And remember: timing is everything, especially in real estate. 2. Not Enough Time Flipping houses is time-consuming. It can take months to find the right property. Once you own the house, you’ll need time to renovate. This means you'll have to give up personal time for demolition and construction if you have a day job. If you pay someone to do the work for you, you’ll spend more time than you expect supervising the activity, and the costs of paying others will reduce your profit. Once the work is done, you’ll need to schedule inspections to make sure that the property complies with applicable building codes before you can sell it. If it doesn’t, you’ll need to spend more time and money to bring it up to par. Selling the property also requires a great deal of time. If you show it to prospective buyers yourself, you may spend plenty of time commuting to and from the property and in meetings. If you use a real estate agent, you will owe a commission. For many people, it might make more sense to stick with a day job, where they can earn the same kind of money in a few weeks or months via a steady paycheck, with no risk and a consistent time commitment. Flipped homes accounted for 8.4% of all home sales in the United States in 2022. This is the highest percentage of flipped homes on the market since 2005, according to data published by ATTOM Data Solutions. 3.
Not Enough Skills Professional builders and skilled professionals, such as carpenters and plumbers, often flip houses as a side income to their regular jobs. They have the knowledge, skills, and experience to find and fix a house. Some of them also have union jobs that may provide unemployment checks all winter long while they work on their side projects. The real money in house flipping comes from sweat equity. If you’re handy with a hammer, enjoy laying carpet, and can hang drywall, roof a house, and install a kitchen sink, then you have the skills to flip a house. But if you don’t know a Phillips-head screwdriver from a flat one, you will need to pay a professional to do the renovations and repairs. And that will reduce the odds of making a substantial profit on your investment. Flipping is also sometimes called wholesale real estate investing. 4. Not Enough Knowledge You must know how to pick the right property, in the right location, at the right price. In a neighborhood of $100,000 homes, do you really expect to buy at $60,000 and sell at $200,000? The housing market is far too efficient for that to occur regularly. Even if you get the deal of a lifetime, like snapping up a house in foreclosure for a song, knowing which renovations to make and which to skip is key. You also need to understand the applicable tax laws and zoning laws and know when to cut your losses and get out before your project becomes a money pit. Big-league lenders have also started to seek profits in the flip-loan marketplace, with global investment firm KKR joining other private investment firms seeking a piece of the action. 5. Not Enough Patience Professionals take their time and wait for the right property. Novices rush out to buy the first house that they see. Then they hire the first contractor who makes a bid to address work that they can’t do themselves. Professionals either do the work themselves or rely on a network of prearranged, reliable contractors.
Novices hire real estate agents to help sell the house. Professionals rely on for-sale-by-owner efforts to minimize costs and maximize profits. Novices expect to rush through the process, slap on a coat of paint, and earn a fortune. Professionals understand that buying and selling houses takes time and that the profit margins are sometimes slim. Do I Need to Have a Cash Offer to Flip a House? No. Cash can be more attractive to sellers, so you may see more cash offers accepted on home-flipping shows. Nationwide, 62.7% of house flips are purchased with cash. However, many people do finance their house flips. It all depends on the situation. Which Cities Are the Best to Flip a House? This depends a lot on what you're looking for and your bankroll. But according to New Silver, which provides capital to real estate investors, the best cities for house flipping are Jacksonville, Atlanta, El Paso, Charlotte (North Carolina), and Hartford (Connecticut). How Long Does It Take to Flip a House? The average length of time it takes to flip a house is about four to six months from the purchase date to the selling of the finished home. Keep in mind, though, that each project is different. In some cases, it may take a month or so, but others may require heavier work. The Bottom Line It looks so easy! At any given time, a half-dozen shows on television feature good-looking, well-dressed investors who make the flipping process look fast, fun, and profitable. But making a nice profit quickly by flipping a home is not as easy as it looks on TV. Novice flippers can underestimate the time or money required and overestimate their skills and knowledge. If you are thinking about flipping a house, make sure you understand what it takes and the risks involved. Article Sources Investopedia requires writers to use primary sources to support their work. These include white papers, government data, original reporting, and interviews with industry experts.
We also reference original research from other reputable publishers where appropriate. You can learn more about the standards we follow in producing accurate, unbiased content in our editorial policy. The offers that appear in this table are from partnerships from which Investopedia receives compensation. This compensation may impact how and where listings appear. Investopedia does not include all offers available in the marketplace.
But if you don’t know a Phillips-head screwdriver from a flat one, you will need to pay a professional to do the renovations and repairs. And that will reduce the odds of making a substantial profit on your investment. Flipping is also sometimes called wholesale real estate investing. 4. Not Enough Knowledge You must know how to pick the right property, in the right location, at the right price. In a neighborhood of $100,000 homes, do you really expect to buy at $60,000 and sell at $200,000? The housing market is far too efficient for that to occur regularly. Even if you get the deal of a lifetime, like snapping up a house in foreclosure for a song, knowing which renovations to make and which to skip is key. You also need to understand the applicable tax laws and zoning laws and know when to cut your losses and get out before your project becomes a money pit. Big-league lenders have also started to seek profits in the flip-loan marketplace, with global investment firm KKR joining other private investment firms seeking a piece of the action. 5. Not Enough Patience Professionals take their time and wait for the right property. Novices rush out to buy the first house that they see. Then they hire the first contractor who makes a bid to address work that they can’t do themselves. Professionals either do the work themselves or rely on a network of prearranged, reliable contractors. Novices hire real estate agents to help sell the house. Professionals rely on for-sale-by-owner efforts to minimize costs and maximize profits. Novices expect to rush through the process, slap on a coat of paint, and earn a fortune. Professionals understand that buying and selling houses takes time and that the profit margins are sometimes slim. Do I Need to Have a Cash Offer to Flip a House? No.
no
Real Estate
Can One Make Quick Profit Flipping Houses?
yes_statement
one can make "quick" "profit" "flipping" "houses".. it is possible to make "quick" "profit" by "flipping" "houses".
https://www.investopedia.com/articles/mortgages-real-estate/08/flipping-flip-properties.asp
Flipping Houses: Is it Better than Buy-and-Hold?
Chip Stapleton is a Series 7 and Series 66 license holder, CFA Level 1 exam holder, and currently holds a Life, Accident, and Health License in Indiana. He has eight years of experience in finance, from financial planning and wealth management to corporate finance and FP&A. The question of whether flipping or buying and holding real estate is the best strategy for investing in property doesn't have one correct answer. Instead, choosing one method over the other should be part of a clear strategic plan that considers your overall goals. You should also take into consideration the opportunities presented by the existing market. Here is a look at what is involved in pursuing each strategy and how to decide which one might be right for you. Key Takeaways Flipping properties and buying and holding real estate represent two different investment strategies. Owning real estate offers investors the opportunity to accumulate wealth over time and avoid the stock market's ups and downs. Flipping can provide a quick turnaround on your investment and avoids the ongoing hassles of finding tenants and maintaining a property, but costs and taxes can be high. Buy-and-hold properties provide passive monthly income and tax advantages, but not everyone is prepared for the management and legal responsibilities of being a landlord. Why Invest in Real Estate? That's a good question. Residential real estate ownership is gaining ever-increasing interest from retail investors for many of the following reasons: Real estate can provide more predictable returns than stocks and bonds. Real estate provides an inflation hedge because rental rates and investment cash flow usually rise by at least as much as the inflation rate. Real estate provides an excellent place for capital in times when you're unsure of the prospects for stocks and bonds. The equity created in a real estate investment provides an excellent base for financing other investment opportunities.
Instead of borrowing to get the capital to invest (i.e., buying stocks on margin), investors can borrow against their equity to finance other projects. The tax-deductibility of mortgage interest makes borrowing against a home attractive. In addition to providing cash flow for owners, residential real estate can also be used for a home or other purposes. Passive vs. Active Income One key distinction between buying and holding and flipping properties is that the former can provide you with passive income, while the latter offers active income. Passive income is money that is earned on investments that continues to make money without any material participation on your part. It could be from stocks and bonds or from owning rental property and receiving rental income each month, provided you hire a management company to do all the required tasks, such as finding tenants, collecting rent, and taking care of maintenance. Active income is money that you earn in exchange for the work that you perform. That includes your salary from work, as well as the profits you make flipping houses. Flipping is considered active income, regardless of whether you are doing the physical labor of stripping floors. It is still a business that you engage in—finding a property to flip, purchasing it, obtaining insurance, overseeing contractors, managing the project, and more. In this sense, flipping isn't just an investment strategy like buying and holding stocks or real estate. If you have a day job, keep in mind that your spare time will likely be taken up with all of the demands that flipping a property entails. Two Ways to Flip Properties Two major types of properties can be used in a buy/sell approach to real estate investing. The first is houses or apartments that can be purchased below current market value because they are in financial distress. The second is the fixer-upper, a property with structural, design, or condition issues that can be overcome to create value. 
Investors who focus on distressed properties do so by identifying homeowners who can no longer manage or sustain their properties or by finding properties that are overleveraged and are at risk of going into default. On the other hand, those who prefer fixer-uppers will remodel or enhance a property so that it works better for homeowners or is more efficient for apartment tenants. The buyer of a fixer-upper using this tactic relies on invested labor to increase values instead of just buying a property at a low cost to create high investment returns. Of course, it is possible to combine these two strategies when flipping properties, and many people do just that. However, consistently finding these opportunities can be challenging in the long run. For most people, flipping properties should be considered more of a tactical strategy than a long-term investment plan. The Pros and Cons of Flipping Pros Faster return on your money Potentially safer investment Cons Costs Taxes Pro: A Faster Return on Your Money One big advantage of flipping properties is realizing gains quickly, which releases capital for other purposes. The average time to flip a house is about six months, though first-timers should expect the process to take longer. Pro: A Potentially Safer Investment Unlike the stock market, which can turn in the middle of a day, real estate markets are often more predictable. In a sense, flipping properties could be considered a safer investment strategy because it is intended to keep capital at risk for a minimal amount of time. It also lacks the management and leasing risks inherent in holding real estate—not to mention the hassles of finding tenants, collecting rents, and maintaining a property. Con: Costs Flipping houses can create cost issues that you don't face with long-term investments. The expenses involved in flipping can demand a lot of money, leading to cash flow problems. 
Because transaction costs are very high on both the buy and sell sides, they can significantly affect profits. If you are giving up your day job and relying on flipping for your income, you're also giving up a consistent paycheck. Con: Taxes The quick turnaround in properties (and speed is everything in successful flipping deals) can create swings in income that can boost your tax bill. That is especially true if things move too fast to take advantage of long-term capital gains tax rules. In those cases, you'll have to pay a higher capital gains tax rate based on your earned income if you own a property for less than a year. The Pros and Cons of Buy-and-Hold Pros Ongoing income Increase in property values Taxes Cons Vacancy costs Management and legal issues Pro: Ongoing Income Owning rental property provides you with regular income, no matter where you are or what you are doing. What's more, buying and holding real estate is a known recipe for amassing great wealth. A lot of "old money" in the U.S. and abroad was accumulated through land ownership. Despite periods of decreasing prices, land values have almost always rebounded in the long run because there is a limited supply of land. Pro: Increase in Property Values The longer you hold your investment property, the more likely you are to benefit from inflation. That will boost the property's value while the amount you borrowed for the mortgage goes down as you pay it off. Suppose you were able to purchase during a buyer's market and sell during a seller's market. Then, there's also real potential for a significant return on your investment. Pro: Taxes Owning a rental property has tax advantages not available to flippers. Rental property is taxed as investment income, with lower tax rates. You can also write off expenses, including repairs, maintenance or upkeep, paying a property manager, and driving to or from your property. 
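The short-term versus long-term holding distinction above can be sketched numerically. The function below is our own illustration, and the two tax rates are hypothetical placeholders, not actual IRS brackets:

```python
def capital_gains_tax(profit: float, days_held: int,
                      short_term_rate: float = 0.32,
                      long_term_rate: float = 0.15) -> float:
    """Tax owed on a sale: the lower long-term rate applies only when
    the property was held for more than a year. Both default rates are
    assumed placeholders for illustration, not real IRS brackets."""
    rate = long_term_rate if days_held > 365 else short_term_rate
    return round(profit * rate, 2)

# The same $60,000 gain, flipped in six months vs. held 14 months:
print(capital_gains_tax(60_000, 180))  # 19200.0
print(capital_gains_tax(60_000, 425))  # 9000.0
```

Under these assumed rates, holding past the one-year mark cuts the tax bill by more than half on the same gain, which is the trade-off a fast flip gives up.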
Furthermore, you'll pay taxes at the long-term capital gains rate should you decide to sell after owning the property for more than a year. Con: Vacancy Costs Being unable to find tenants is one of the risks of owning rental property. That is true whether you do it yourself or hire a management company to do it for you. If your property sits empty for months or years, you are responsible for covering the mortgage during that period. Before investing in a buy-and-hold property, you'll want to make sure your budget will cover one to three months of vacancy per year. Con: Management and Legal Issues Long-term real estate ownership is a management-intensive endeavor that is outside the skill set of many investors. Some investors, especially first-time rental property owners, are ill-prepared or ill-equipped to deal with the responsibilities and legal issues that come with being a landlord. The process of finding quality tenants and meeting their needs can be a stressful and time-intensive undertaking. However, successful property management is necessary to ensure ongoing cash flows from one's investment. Choosing a Strategy You need to answer a few critical questions to decide whether flipping properties or holding them long-term is the best strategy. You must decide whether your capital allocation to real estate is a permanent investment or just a way to profit from an expected rise in home prices. It would also help if you determined what risk and return ratio is appropriate for this portion of your investment portfolio. Finally, you must have the risk tolerance and skills to take on the management responsibilities that go along with either type of investment. Suppose the capital is not available to purchase a diversified portfolio. In that case, a prospective investor must be prepared to take on unsystematic risk. That includes individual property risks and potential lack of demand for the property, whether by homeowners or renters. 
If you're considering a buy-and-sell strategy, you must also determine whether you have the skill to uncover distressed sale properties or fixer-uppers. In this transactional strategy, it's essential to figure out whether capital can be turned enough times within a given investment period to overcome the transaction costs. They include brokerage, financing, and closing fees. You can enjoy both strategies' benefits by developing a business flipping houses and using your profits to invest in long-term rental income properties. The Bottom Line The choice between the two strategies in question depends on your particular financial situation and goals. Nonetheless, the long-term holding strategy is generally more appropriate for those who use real estate as a core portion of their overall investment portfolios. On the other hand, flipping properties is usually better when real estate is used as an adjunct or a return-enhancement tactic. Investors wishing to amass wealth and derive income from their real estate investments should consider holding real estate for the long term. They can use the equity built into the portfolio to finance other investment opportunities, with the potential of eventually selling the properties in an up-market. Flipping properties is a tactic that is best suited for periods when prospects in the stock and bond markets are low. It can also work for people trying to realize short-term capital gains for as long as the housing market allows. Article Sources Investopedia requires writers to use primary sources to support their work. These include white papers, government data, original reporting, and interviews with industry experts. We also reference original research from other reputable publishers where appropriate. You can learn more about the standards we follow in producing accurate, unbiased content in our editorial policy. The offers that appear in this table are from partnerships from which Investopedia receives compensation. 
However, consistently finding these opportunities can be challenging in the long run. For most people, flipping properties should be considered more of a tactical strategy than a long-term investment plan. The Pros and Cons of Flipping
Pros:
- A faster return on your money
- A potentially safer investment
Cons:
- Costs
- Taxes
Pro: A Faster Return on Your Money One big advantage of flipping properties is realizing gains quickly, which releases capital for other purposes. The average time to flip a house is about six months, though first-timers should expect the process to take longer. Pro: A Potentially Safer Investment Unlike the stock market, which can turn in the middle of a day, real estate markets are often more predictable. In a sense, flipping properties could be considered a safer investment strategy because it is intended to keep capital at risk for a minimal amount of time. It also lacks the management and leasing risks inherent in holding real estate—not to mention the hassles of finding tenants, collecting rents, and maintaining a property. Con: Costs Flipping houses can create cost issues that you don't face with long-term investments. The expenses involved in flipping can demand a lot of money, leading to cash flow problems. Because transaction costs are very high on both the buy and sell sides, they can significantly affect profits. If you are giving up your day job and relying on flipping for your income, you're also giving up a consistent paycheck. Con: Taxes The quick turnaround in properties (and speed is everything in successful flipping deals) can create swings in income that can boost your tax bill. That is especially true if things move too fast to take advantage of long-term capital gains tax rules. In those cases, you'll have to pay the higher tax rate that applies to your earned income if you own a property for less than a year.
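The tax difference described above can be sketched with a quick calculation. The 32% and 15% rates below are illustrative assumptions (actual brackets depend on your total income and filing status), not figures from the article:

```python
# Hedged sketch: tax on a $50,000 flip profit when the gain is short-term
# (taxed as ordinary income) versus long-term (taxed at the capital gains
# rate). Both rates here are illustrative assumptions.

def tax_owed(profit, rate):
    """Tax due on a gain at a flat illustrative rate."""
    return profit * rate

profit = 50_000
short_term = tax_owed(profit, 0.32)   # held under a year: ordinary income rate
long_term = tax_owed(profit, 0.15)    # held over a year: capital gains rate

print(f"Short-term tax: ${short_term:,.0f}")   # $16,000
print(f"Long-term tax:  ${long_term:,.0f}")    # $7,500
print(f"Holding past one year saves: ${short_term - long_term:,.0f}")  # $8,500
```

The spread is why speed-focused flippers often accept the higher rate: the same dollar of profit simply costs more in tax when the property turns over in under a year.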
yes
Real Estate
Can One Make Quick Profit Flipping Houses?
yes_statement
one can make "quick" "profit" "flipping" "houses".. it is possible to make "quick" "profit" by "flipping" "houses".
https://www.homelight.com/blog/how-much-can-you-make-flipping-houses/
How Much Can You Make Flipping Houses? The Answer May ...
How Much Can You Make Flipping Houses? The Answer May Surprise You Former art and design instructor Christine Bartsch holds an MFA in creative writing from Spalding University. Launching her writing career in 2007, Christine has crafted interior design content for companies including USA Today and Houzz. At HomeLight, our vision is a world where every real estate transaction is simple, certain, and satisfying. Therefore, we promote strict editorial integrity in each of our posts. “Flipping can be a great way to earn quick cash,” advises Dustin Parker, a top-selling real estate agent in Seaford, Delaware, who’s also a house flipper himself. “However, you run the risk of purchasing a property at a high price point right before the market takes a downturn. If home values fall while you’re doing the renovations, then you’re stuck with a lot of money invested in a house that you can’t sell at a profit.” So, how much can you make flipping houses? Here we’ll cover:
- How much you can make on a single flip
- The average earnings for a house flipper
- House flipper success rates
- All the costs you need to budget for on each flip
- How to get financing for your investment property
Then, we’ve got 5 tips from experienced investors on how to avoid losing money on your first few house flips. How much can I make on a single flip? In the third quarter of 2019, flippers averaged a 40.6% ROI or a gross profit of $64,900 per flip, according to leading property data firm ATTOM Data Solutions. In this case, ROI is calculated by dividing the gross flipping profit ($64,900) by the purchase price (a median $160,000). To be considered a flip by ATTOM’s standards, a property has to be bought and sold within a 12-month span. It’s important to note that the gross profit figure is the difference between what a property originally cost and what it sold for.
In ATTOM’s methodology you’ll see that this number does not include the cost of rehab and renovations, which flipping veterans estimate will run between 20%-33% of the home’s value after repairs. So let’s see how much you’d make with a hypothetical flip house based on these gross average returns while also accounting for your expenses. You buy a house for the median price of $160,000 with the intention of flipping it. Based on the current averages, your gross profit would amount to $64,900 (or 40.6% ROI) for a sale price of $224,900. Your average cost of renovations, at 20%-33% of the after-repair value (in this case $224,900), amounts to $44,980-$74,217. “It’s always our goal to make about 20% profit margins for the investors that we work with on flips, which is pretty standard for our area,” says Parker. “While we target 20%, sometimes you fall a little short. I would say the average margin for a flip is 15%. However, it’s possible you’ll hit a home run and get 50% or 60% on one flip alone.” What are the average earnings for a flipper? How much you can earn overall as a flipper depends on a lot of factors, including whether you’re able to identify and purchase discounted properties, hit your targeted budget for rehab and repairs, and how many houses you flip each year. According to one experienced home flipper and blogger, full-time house flippers may flip anywhere from 1-20 houses per year, but looking past those extremes, 2-7 houses per year is a more realistic range to work with. Let’s say you flip two houses a year at the median price point, and make $19,920 per flip, at a 12% ROI, after renovations and costs incurred per the example above. That’s only $39,840 per year, and that’s at the very low end of the rehabbing cost spectrum. However, if you’re able to do all 7 flips that year, you’d rake in $139,440. Experienced flippers are able to maximize their profit margins for a number of reasons.
For starters, they can afford to buy materials in bulk for multiple houses at once. Plus, as steady customers, they’re able to negotiate better deals with vendors and contractors. Not to mention the fact that they develop relationships with investor-friendly agents and other investors who tip them to great buys before they hit the market. Maybe you’re thinking you’ll just flip a few more houses every year to increase your profits. Keep in mind, though, that it takes an average of 177 days to flip a house—that’s almost 6 months during which your capital is tied up, with no guarantee on what kind of return you’ll see. How much you’ll average as a house flipper also depends on where you’re buying and selling your flips. Some markets are more profitable than others. Looking at individual cities from ATTOM’s report, a handful saw unbelievable gross flipping margins in Q3 2019: Pittsburgh (133%); Flint, Michigan (111%); Cleveland, OH (110%); and Hickory-Lenoir-Morganton, NC (110%). In these markets, investors are more than doubling their money on flips (again, not accounting for the cost of repairs). Meanwhile the cities with the smallest gross flipping profits include Raleigh, NC ($25,000), Austin, TX ($27,549), and Phoenix, AZ ($31,135), where you can see how your profit margins would be razor thin. The popularity of flipping in your area makes a difference, too. As investors flock to the next opportunity, competition gets stiff for profitable properties. So, while it is possible to make some serious cash as a house flipper, many who attempt it won’t, because flipping is such a high-cost, high-risk investment due to the many variables involved. What’s the home flipper success rate? As flipping is a high-risk investment, it’s not at all surprising that for every flipper who made an impressive profit, there were plenty who didn’t.
There’s not a lot of hard data on how many flippers only manage to break even or actually lose money on a flip—probably because people aren’t too keen on publicly proclaiming their failures. “The success rate for flippers is probably pretty low because it’s become a really competitive market thanks to people watching house flipping reality tv shows,” advises Parker. “The difference between successful and unsuccessful flippers is treating it like a full-time job and not a hobby. Those hobby flippers who get burned on their first flip probably won’t try it again, which is typically what we see.” What costs do I need to budget for on each flip? Flippers who want to make serious cash need to become frugal budgeters. Your biggest expense is the purchase of the property itself—and you need an as-is property that’s in good enough shape to fix it up without spending too much money. Once you find the right property, priority one is making sure you don’t pay too much. Figuring that out requires estimating the after repair value (ARV). You find this by averaging the sold prices of nearby good-condition comps (with similar lot size, square footage, number of rooms, etc.) to determine how much your as-is property will sell for once it’s fixed up. Once you have that ARV, and an estimate on how much it’ll cost to fix the house up, you’ll know how much you can pay for the property itself and still hit your ROI goals. But you can’t just price out the cost of a bucket of paint and new flooring to determine your expenses. Flipping requires juggling and budgeting a lot of factors that you may not even think of as a first-timer. Add it all up, and that’s a lot of money to have tied up in a property for six months. And even after all that, you still need a sizable chunk of capital held in reserve for any unexpected expenses, say if you find termites in the house, or the ancient HVAC goes kaput.
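The ARV arithmetic described above can be sketched as a quick back-of-the-envelope script. The comp prices, repair estimate, and 20% profit target below are illustrative assumptions, not figures from the article:

```python
# Hypothetical comps: sold prices of nearby, similar, fixed-up homes.
comps = [210_000, 225_000, 219_000]
arv = sum(comps) / len(comps)        # estimated after-repair value: $218,000

repair_estimate = 40_000             # what it will cost to fix the house up
target_profit = 0.20 * arv           # aiming for a 20% margin on the ARV

# The most you can pay for the as-is property and still hit your goal.
max_offer = arv - repair_estimate - target_profit
print(f"Estimated ARV: ${arv:,.0f}")                 # $218,000
print(f"Maximum purchase price: ${max_offer:,.0f}")  # $134,400
```

In practice you would also subtract holding and closing costs from the maximum offer, which is why a healthy reserve matters.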
“Even experts run into unexpected, unpleasant and expensive surprises. We once helped a client that does about 50 flips a year as a full-time job get a really good deal on a foreclosure property,” advises Parker. “He found more issues than expected during the renovations, so his costs far exceeded his budget. The property did eventually sell, but he ended up only making a 2% return on that house when he’d expected to make 20%. It was a lesson learned to budget a good sum more than you’re expecting to need.” Bottom line is some flips are quick and easy and all you’ll need to do is slap on some paint and install new flooring. But you also have to be prepared in the event you need to jack the entire house up and replace foundation, the roof, and everything else. How can I get financing for flipping? “Cash is king. In today’s market, there’s so much competition for flips that we’re finding as-is sellers aren’t entertaining too many financed offers—especially if it’s a foreclosure or a bank-owned property. They’re looking for cash,” advises Parker. “Plus, you’re not going to be able to finance a flip traditionally because the property is going to have problems. Most government-backed mortgages, like FHA, VA and USDA will not support the purchase of any property that’s not move-in ready.” However, just because most pro flippers don’t finance, doesn’t mean it isn’t available. “There are some ways to borrow money to flip. A conventional loan may be an option,” suggests Parker. “There are also exotic loans like the FHA 203K, which is essentially a construction loan to finance a flip—but that’s a difficult and time-consuming process for both the lender and the contractor you select to do the renovations.” Both of these loan types have pros, cons, and conditions that could hamper your flipping plans, so go over the fine print with your lender before signing on the dotted line. If traditional lenders are a no-go, you can also seek out a hard money loan. 
In just about every market you’ll find investors who have money that they’re willing and interested to invest into flips—they just don’t want to do the work themselves. The downside is that the interest rates on hard money loans are typically high—from 10% up to 18% or more. So you need to complete the flip as quickly as possible so you don’t incur those high interest rates for too long. Plus, they’re typically for a shorter time frame, such as 12 months to five years—which makes the monthly mortgage payments higher, and can make it difficult to hold onto the property if it doesn’t sell right away at the right price. “Don’t overdo it on the renovations or you’ll significantly cut into your margins,” advises Parker. “Newbies who love to watch the flipper TV shows want to put in expensive granite countertops and hardwood floors. That may look beautiful, but it’s also very expensive. Buyers aren’t looking to move into the Ritz Carlton. They’re looking for something that’s nice, affordable, and move-in ready.” 2. Account for closing costs when calculating your margins Newbie flippers often forget to budget enough for closing costs. “Let’s say you purchase a house for $100,000, and you need to spend about $50,000 in renovations. At that point you’ve got $150,000 into the property,” says Parker. “If your goal is to get a 20% return, or a profit of $30,000, on that $150,000 investment, you can’t just sell for $180,000 and think you’ll hit your margins. You’ve got to account for closing costs, association fees, agent commission, and so forth.” Experts recommend setting aside 2% to 5% of the home’s value to cover closing costs. If your property’s ARV is $200,000, you need to set aside $4,000 to $10,000 for closing costs on one transaction (buying or selling). So if you want to hit your $30,000 profit margin, you’ll need to sell that property for closer to $190,000 or $200,000 to cover those closing costs times two. 3. 
Surround yourself with an experienced team No matter how much you read up on successful flipping strategies, there is no substitution for actual experience. The only way to get that when you’re a newbie is to partner with people who’ve done it before. “It’s vital to have a good team of professionals to help you safeguard against making big financial mistakes. Assemble one that includes an agent, an attorney, several lenders, and multiple contractors,” says West. The first teammate to enlist is a real estate agent with flipping experience. They’ll know the best neighborhoods for flipping, have a line on bargain properties, and keep a digital rolodex of other flipping experts to connect you with, like investors, contractors, and even lenders. “In Delaware we have at least three or four investment groups that have monthly meetings to share ideas, share contractors, and share strategies. They’ll also bring in speakers to share information on successful flipping,” says Parker. You can find these networking and investor education groups through online searches, on social media platforms like Facebook, or by joining professional organizations, like the Real Estate Investment Association. 5. Find properties to flip before they’re listed With more flippers competing in the market, bidding wars drive the prices for properties with profit potential higher and higher. As home prices rise, profit margins become razor thin. Eventually, even the most rundown houses become so overpriced that it’s no longer affordable to fix and flip them. The best way to combat rising prices is to find properties to buy before they hit the market. “The hardest part of the job for a flipper is to find that good buy, because if it’s a good deal and it hits the open market, every other investor in that area will be after it and that’ll drive that price up,” advises Parker. “Our best flips with the highest returns have typically been found off-market.
You can find sellers off market through mailings, advertising, cold calling, door knocking, and referrals.” Asking your flip-experienced agent to keep an eye out for good deals on off-market properties is essential. You may also luck out in your investment club networking and meet a real estate wholesaler who makes money by pounding the pavement to dig up pre-market listing tips for a modest finder’s fee. Never lose money flipping houses There’s no way to guarantee how much money you’ll make as a house flipper, or even if you’ll turn a profit at all. However, if it looks like you’re going to break even or lose money on a flip, there is one thing you can do to salvage your investment: “Here’s the nice thing about flipping—if it doesn’t make sense to sell the property at the moment, then you can just rent it out for a couple of years,” advises Parker. “That way you’ll recoup some of your expenses through the rental income, and you can always sell when the market’s improved.”
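The per-flip arithmetic from the HomeLight example above (median $160,000 purchase, $64,900 gross profit, rehab at 20%-33% of the after-repair value) can be sketched as:

```python
# Sketch of the per-flip numbers discussed above; only the figures cited in
# the article are used (median purchase price, gross profit, rehab range).

purchase_price = 160_000
gross_profit = 64_900
sale_price = purchase_price + gross_profit   # $224,900 after-repair value

rehab_low = 0.20 * sale_price                # $44,980 at the low end
rehab_high = 0.33 * sale_price               # $74,217 at the high end

net_per_flip = gross_profit - rehab_low      # $19,920 net, roughly a 12% ROI
print(f"Net profit per flip: ${net_per_flip:,.0f}")
print(f"Two flips a year: ${2 * net_per_flip:,.0f}")     # $39,840
print(f"Seven flips a year: ${7 * net_per_flip:,.0f}")   # $139,440
```

Note that at the high end of the rehab range, the renovation cost alone exceeds the gross profit, which is the article's point about how easily margins evaporate.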
https://www.forbes.com/sites/forbesrealestatecouncil/2020/02/25/three-ways-to-flip-houses-with-no-money/
Three Ways To Flip Houses With No Money
Flipping houses is a lucrative business for many full-time flippers. It also provides considerable side income for part-time house flippers. If you watch HGTV on any given day, it is likely that you will come across several shows where property investors take dilapidated homes, which are eyesores, and then convert them into jaw-dropping and chic abodes. Not only that, but they also manage to make a profit after some major renovations.​ This is the world of house flipping. What Is House Flipping? Flipping is a quick-profit strategy where an investor buys real estate at a discounted price and then improves the property to offload it at a better price. Rather than buying a property to live in, you are purchasing a home as a real estate investment. It is worth mentioning that the main goal of flipping is to purchase low and sell high. Flipping houses can be an extremely lucrative strategy, especially if the real estate market is performing well. Note that foreclosures and old homes are popular properties to use in house flipping. This is because most real estate investors can buy these properties fairly cheaply, improving their potential profit. Can real estate investors actually flip houses without any money down? The answer is yes. If you want to flip a property but don't have enough money for a down payment, don't worry. There are options that will allow you to easily enter the house-flipping market. Here are three great options to help you flip homes with no money. 1. Hard Money Lenders If you are not content with parting with a significant amount of money upfront to buy real estate, then a hard money loan can be the answer. Hard money lenders are people who lend money to others at a high interest rate and often charge points on top of that. Hard money lenders will usually let you borrow comparatively more than conventional banks and other financial institutions. 
A hard money loan is one of the best options for individuals who are experienced investors and have one or multiple existing properties. They are also ideal for owner-occupants with substantial equity in their homes and a great credit score. Another great thing is that you can finance all the property repairs with some hard money lenders. Unlike conventional bank loans, your ability to get hard money financing is not determined by your creditworthiness. However, the fees and rates are often higher with hard money loans. Note that the interest rates may range from 8-15%, and the points range from one to five. You should also keep in mind that a majority of hard money lenders will typically only loan you a certain percentage of the purchase price — usually around 70%. When evaluating various hard money lenders, you should pay close attention to interest rates, fees and loan terms. 2. Private Money Lenders If you have all the technical skills and experience to flip houses, but not the funds, then this option is best for you. Private money lenders are individuals who have the funds and would like to invest in real estate. However, they just do not have the expertise and time, or would rather be on the golf course or beach than swinging mallets. Private lenders have liquid money to spare and are willing to lend it to you at a predetermined interest rate. Perhaps the most suitable source of finance for no-money-down deals is a private money lender. The money partner or lender can sit back, relax and pay the money, while the other partner will manage the logistics of the real estate project and ensure they complete the house flip quickly and professionally. You can borrow the whole purchase amount and repairs plus some other costs if you manage to find the right private lender. It is worth noting that the amount of money the lender will provide depends on the comfort level between you and the lender, your experience and the real estate deal itself. 3.
Wholesaling Another great option to flip real estate with no money is using real estate wholesaling. Wholesaling homes is an excellent idea for investors who already have a viable flip business. Keep in mind that for property wholesaling to work in your favor, you've got to have an existing and reliable network of real estate investors looking for a few fix-and-flip deals. So, you cannot simply purchase a house and hope for the best. It is vital to have a plan to succeed. Wholesalers often make money based on a specific percentage of the final sale price, which is typically between 5% and 10%. When wholesaling fix-and-flip properties, you are selling the opportunity to buy a house without ever assuming the title. You will make an assignment fee as you are acting as an intermediary. Final Words Flipping homes with no money down often entails being creative, working with other investors and thinking outside the traditional loan box. Your best chances of obtaining funding are private money lenders, real estate wholesaling and hard money lenders.
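The hard money figures above (8%-15% interest, one to five points, roughly 70% of the purchase price) can be turned into a rough carrying-cost sketch. The deal size, 3 points, 12% rate, and six-month hold below are illustrative assumptions, not figures from the article:

```python
# Rough sketch of what hard money financing adds to a flip's cost basis.
purchase_price = 200_000
loan = 0.70 * purchase_price                # lender funds ~70%: $140,000
down_payment = purchase_price - loan        # you still need $60,000 in cash

points_fee = 0.03 * loan                    # 3 points charged up front: $4,200
months_held = 6
interest = loan * 0.12 * months_held / 12   # 12% annual rate for 6 months: $8,400

print(f"Down payment needed: ${down_payment:,.0f}")
print(f"Financing cost over {months_held} months: ${points_fee + interest:,.0f}")
```

That roughly $12,600 in financing cost comes straight out of the flip's profit, which is why a quick sale matters so much when borrowing at hard money rates.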
https://smartasset.com/mortgage/a-beginners-guide-to-flipping-houses-for-profit
Flipping Houses for Profit: A Beginner's Guide - SmartAsset
A Guide to Flipping Houses for Profit Reality shows have made flipping homes quite popular, and there appears to be some merit to it. In fact, according to New Silver, the average net profit for house flipping was $30,000 in March 2022. Further, in the second quarter of 2021, the average gross profit made per home flip in the U.S. amounted to $67,000. In the third quarter of 2021, the average return on investment for house flipping was 32.3%, according to ATTOM. Still, achieving success in flipping homes means understanding some key features of the practice. What follows is an introduction to the keys of home flipping. If you have questions about real estate investments, then you should consider speaking with a financial advisor. Find a Suitable Real Estate Market Even if you buy a reasonably priced home and stay within your renovation budget, that doesn’t mean you’re going to sell for a big profit. Studies show a wide disparity in the profits home flippers earned in different regions. A December 2021 report by Balancing Everything says that the following cities were among the best for flipping a home, in terms of average return on investment (ROI):
- Pittsburgh, Pa. – ROI of 162.4%
- Atlantic City, NJ – ROI of 141.6%
- Memphis, Tenn. – ROI of 132.7%
- Denver, Colo. – ROI of 109%
- New Orleans, La. – ROI of 104.2%
Of course, these areas may fall beyond your scope. Nonetheless, be sure to take a magnifying glass to home sales and house flipping profits in your location. Maybe you just need to venture an hour or so out of your zone to find a more profitable place to flip a house in. In addition, you should pay close attention to the neighborhood you invest in. What’s the income level and what’s the school district like? How about the crime rate? You can radically boost a dirt-cheap home, but it won’t sell as easily if it sits in a neighborhood with a recent spate of burglaries. Also, be wary of areas where homes are selling at a high rate.
This could mean the local economy or neighborhood conditions are pushing people out. Instead, you’re going to want to invest in places with high employment numbers, low crime rates and other signs that the neighborhood is thriving or quickly making its way up. Ultimately, you want to find an area that combines safety and economic growth with the potential for a profitable house flip. Create a Budget for Your House Flip Once you have a sense of your target neighborhood and going prices for houses in it, it’s time to set up a house flipping budget. First, you need to know what you can reasonably pay for a new home. Our home affordability calculator can give you a clear picture. Buying with all cash is the simplest route for home flippers. It cuts out the mortgage application and approval process, as well as makes your offer more attractive to sellers. Plus, you won’t need to make ongoing interest payments for the property as the renovations are underway. Still, some house flippers need financing. According to a report by ATTOM, 40.5% of flipped homes were purchased using financing. Once you nail down the amount you’ll need for the actual house, you should explore the costs of potential projects. Many people drop the ball here by failing to take the housing market into account. For example, if neighborhood prices top out at, say, $100,000, and you pay $50,000 for the house alone, a $35,000 kitchen upgrade is going to eat into your net profit in a serious way. In this instance, you might want to limit the kitchen remodeling to $15,000. When calculating how much you think you can get for a house, aim for the lower end of comparable sales prices. This will give you more wiggle room, should your renovations go over budget. Now that you know how much you can and should spend, you’re almost ready to start shopping for a house, and financing if you need it. To maximize your return, you still need to double-check that you’re taking everything into account.
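The budget logic above can be sketched numerically, using the article's example of a neighborhood where prices top out around $100,000 and the house itself costs $50,000:

```python
# Sketch of the budget check described above: how renovation spending eats
# into the margin when the neighborhood caps your realistic resale price.
neighborhood_cap = 100_000     # realistic top sale price in the area
house_price = 50_000           # what you pay for the as-is property

for kitchen_budget in (35_000, 15_000):
    margin = neighborhood_cap - house_price - kitchen_budget
    print(f"${kitchen_budget:,} kitchen remodel -> ${margin:,} gross margin")
```

The same check works for any line item: every renovation dollar comes out of a margin that the neighborhood ceiling has already fixed.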
There are likely some big factors that may not be on your radar. Costs and Risks of Flipping Houses Home flipping has been popularized by major networks like HGTV, but 30-minute recaps of only successful projects fail to capture the real costs of flipping homes. Let’s start by exploring home improvement costs. Below is a breakdown of the average costs of various home improvement projects with the percent of costs recouped, according to a 2021 report by Remodeling magazine. Keep in mind that these averages are only guides, as prices can vary significantly by location and materials. 2021 National Average Costs of Home Improvement Projects (average cost / resale value):
- Mid-range bathroom addition: $56,946 / $30,237
- Upscale bathroom addition: $103,613 / $54,701
- Mid-range bathroom remodel: $24,424 / $14,671
- Upscale bathroom remodel: $75,692 / $41,473
- Mid-range kitchen remodel: $75,571 / $43,634
- Upscale kitchen remodel: $149,079 / $80,284
- Metal roofing replacement: $46,031 / $25,816
- Wood deck addition: $16,766 / $11,038
As you can see, these projects recouped, on average, roughly 53% to 66% of their cost at resale. So if you’re depending on financing to pay for the renovations, these costs are also going to hurt your bottom line. Be sure to explore all your options, including a home improvement loan, second mortgage and credit to finance your house flip. You want to take care that you don’t overextend yourself. Also, you don’t want to make the rookie mistake of thinking you’ll save money by doing a lot of the work yourself, so you spend more on materials. If you’ve never retiled a bathroom before, it may take you longer than a professional would take, and time is money when you’re paying interest for your financing. In the end, it may have been cheaper to hire a professional from the get-go, especially if you have to ask one to redo your work. Of course, you can do light cosmetic upgrades like painting and stripping woodwork.
But leave projects involving plumbing, electrical and structural changes to the professionals. That said, don’t just go for the cheapest labor. This is a big investment you’re making and you’re going to need the right talent. So make a thorough search for contractors and read online reviews. And ask your friends and family for any recommendations. You should factor in the size of the home as well. After all, a renovation on a large home will cost more than the same project in a smaller one by virtue of it requiring more materials. It’ll also take more time, which, as mentioned earlier, is valuable if you borrowed money for this investment.

5 Common House Flipping Mistakes

There are a lot of mistakes rookie house flippers could make. Some major things to avoid include:

Not having enough money: You’ll want to make sure you have the funds needed to get off the ground and do a good job with your project.
Not leaving enough time: If your finances require too quick of a turnaround, you won’t be able to do a good job with your flip. Make sure you can handle owning the house long enough to get the work done.
Not getting the improvements right: You don’t want to do too much work and leave the home too expensive to sell, but you also need to actually improve the home. Make sure you find the right balance.
Not pricing correctly: This covers a few things. You’ll want to make sure you’re getting a good deal on the property you buy, but you also need to put a fair price on your home to make sure you can move it.
Not focusing on the sale: No matter how good of a job you do on renovation, you need selling skills. Don’t forget about staging and other selling strategies.

Selling the Home You’re Flipping

While you’re likely fine buying the house on your own, you’ll definitely need a professional to help you sell it. If you don’t have a realtor already, aim to interview a few. You want someone who can give you a thorough analysis of an after-repair value for the home.
You also want someone with a great track record of selling properties in your area for top dollar. Finally, only sign on with someone you like and trust. To make sure you’re doing all you can to help sell the house, take a look at our guide on how to sell your house. Bottom Line Flipping houses can be a lucrative business venture if you do it right. But you can run into several pitfalls along the way. To avoid issues, be sure to research different real estate markets and find a thriving neighborhood where you can find a low-cost home that you can reasonably sell for a profit. You should also stick to a budget and keep things small if you’re a beginner. Without a doubt, you should always develop a house flipping budget that’s realistic and covers everything. It should include the purchase price of the home, financing costs for any loans, labor, materials and professional fees. Try to keep costs down while you renovate, and work with a realtor or financial advisor for professional guidance. Real Estate Investing Tips Flipping houses will affect your cash flow, so planning ahead of time is crucial. Finding a qualified financial advisor doesn’t have to be hard. SmartAsset’s free tool matches you with up to three financial advisors who serve your area, and you can interview your advisor matches at no cost to decide which one is right for you. If you’re ready to find an advisor who can help you achieve your financial goals, get started now. Use SmartAsset’s mortgage comparison tool to compare mortgage rates from top lenders and find the one that best suits your needs. Consider hiring a contractor before you buy. Their fees may cut into profit margins when you’re tackling a home improvement project, but their professional evaluation of houses will likely save you money down the line. Javier Simon, CEPF®. Javier Simon is a banking, investing and retirement expert for SmartAsset. The personal finance writer’s work has been featured in Investopedia, PLANADVISOR and iGrad.
Javier is a Certified Educator in Personal Finance (CEPF) and a member of the Society for Advancing Business Editing and Writing. He has a degree in journalism from SUNY Plattsburgh. SmartAsset Advisors, LLC ("SmartAsset"), a wholly owned subsidiary of Financial Insight Technology, is registered with the U.S. Securities and Exchange Commission as an investment adviser. SmartAsset’s services are limited to referring users to third party registered investment advisers and/or investment adviser representatives (“RIA/IARs”) that have elected to participate in our matching platform based on information gathered from users through our online questionnaire. SmartAsset does not review the ongoing performance of any RIA/IAR, participate in the management of any user’s account by an RIA/IAR or provide advice regarding specific investments. We do not manage client funds or hold custody of assets; we help users connect with relevant financial advisors. This is not an offer to buy or sell any security or interest. All investing involves risk, including loss of principal. Working with an adviser may come with potential downsides such as payment of fees (which will reduce returns). There are no guarantees that working with an adviser will yield positive returns. The existence of a fiduciary duty does not prevent the rise of potential conflicts of interest.
no
Real Estate
Can One Make Quick Profit Flipping Houses?
yes_statement
one can make "quick" "profit" "flipping" "houses".. it is possible to make "quick" "profit" by "flipping" "houses".
https://learn.roofstock.com/blog/how-to-start-flipping-houses
How to start flipping houses for a profit in 2022
How to start flipping houses for a profit in 2022 If you’re looking for a way to make quick profits in real estate, house flipping just might be the way to go. To be sure, making money flipping houses isn’t as easy as they make it seem on TV. House flipping requires a lot of hard work, expertise, and patience. But when you know how to do it, flipping houses can be a potentially lucrative short-term real estate investment. What is house flipping? Real estate investors who flip houses buy the property at a discount, make any needed repairs and renovations, and sell the property at a profit. In other words, house flipping follows the classic investment strategy of buying low and selling high. House flipping can be a potentially profitable way to invest in real estate when there is more demand for homes than there is supply, as in many real estate markets today. Most homebuyers don’t have the time, energy, money, or knowledge to find deals and do their own repairs. But, they may be more than willing to pay a good price for a home where all of the work has already been done. Popular properties for flipping Not every house is a good candidate for flipping. Popular properties that can be good for flipping usually fall into one of three general categories: The first option is older homes in need of repair. In today’s economy, there are homeowners who simply don’t have the money to make the necessary repairs to get top dollar for their homes. Real estate investors who have the capital can take advantage of opportunities like these. Short sales are another good option for finding property to flip. Homeowners who have missed several mortgage payments and are in the process of getting foreclosed on sometimes try to sell short – or sell for less than the current mortgage balance – in order to avoid the embarrassment of foreclosure and a bad credit score. REO homes, meaning “real estate owned” by the bank, are properties that have already been foreclosed on.
Because banks aren’t in the real estate investment business, they are often very motivated to sell at a below-market price so that they can get the property off of their balance sheet. Out-of-state owners and people who have inherited a home are another good source for finding properties to flip. Sometimes owners who live in a different state aren’t cut out to be long distance landlords, because they don’t understand how remote real estate investing works. People who inherit real estate may not want the house in the first place and may be very willing to sell for cheap to an investor who offers to close fast. 5 steps to flip a house Flipping a house is a little bit different from buying a turnkey rental property. You need to understand the market trends to buy low, accurately estimate the cost of repairs and how long they will take, and predict the price you can sell at while still making a profit. There are five steps to follow to flip a house: 1. Market research Buying low and selling high is much easier said than done. In order to make a profit flipping a house, you need to choose a property that offers enough upside potential after the cost of repairs has been factored in. Generally speaking, middle-income and working-class homes in 2, 3, and 4-star neighborhoods are the best places to find a house to flip. 2. Estimate repair and update costs Successful house flippers focus on homes that need inexpensive cosmetic repairs such as paint and flooring, updated fixtures like sinks and faucets, and new stainless steel appliances. It’s much easier to estimate repair costs like these versus trying to fix structural problems like a cracked wall or foundation, which require the use of a licensed general contractor and pulling permits from the city. Real estate investors who flip houses for a living use the 70% Rule to calculate the maximum offer price on a house being flipped. 
You’ll need to know the cost of repairs and the after repair value (ARV) to see if a deal makes sense. Then you can use this formula to determine the maximum offer price on a house that is being flipped: Maximum Offer Price = 70% of ARV – Repair Cost If the ARV of a home you are considering flipping is $150,000 and the needed repairs are $15,000, the maximum offer price you could make is $90,000 (70% of $150,000 is $105,000, minus $15,000 in repairs). 3. Arrange financing Two good options for paying for a house to flip are to use all cash or a short-term hard money loan. Traditional buy-and-hold real estate investors normally use leverage to increase the overall return on investment. On the other hand, house flippers try to move in and out of a deal quickly and try not to accumulate debt or make interest payments that could eat into potential profits on the upside of the flip. Also, most conventional lenders won’t make a loan on a home that is being flipped, due to the perceived increased risk. 4. Network with contractors Contractors, handymen, and material suppliers will usually give you better pricing when they know you’ll be sending a constant flow of business their way. By creating a network of trusted and cost-effective contractors, you’ll reduce the risk that the cost of repairs will be higher than expected. One of the worst things that can happen when you are flipping a home is to have the purchase price plus the cost of repair exceed the fair market value of the house. If that happens, you’ll be flipping the home at a loss, something no real estate investor wants to do. 5. Buy a property to flip After you’ve thoroughly analyzed the local real estate market and have located a property that would make a good flip, the next step is to put the property under contract. Because you’re making an offer at a below-market price, sellers will expect a purchase contract with very few contingencies and a fast close of escrow.
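The 70% Rule described in step 2 is simple enough to capture in a few lines. A minimal sketch (the function and parameter names are my own, not from the article):

```python
def max_offer_price(arv, repair_cost, margin=0.70):
    """70% Rule: maximum offer price on a house being flipped.

    arv: after repair value -- what the renovated home should sell for
    repair_cost: estimated cost of repairs and updates
    margin: rule-of-thumb multiplier (70%); the remaining 30% is the
            buffer for holding costs, selling costs, and profit
    """
    return round(margin * arv - repair_cost)

# Example from the text: ARV of $150,000 and $15,000 in repairs.
print(max_offer_price(150_000, 15_000))  # 90000
```

If a seller won’t come down to the number this produces, the deal doesn’t fit the rule and the flipper walks away.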
Have your team of contractors ready to begin making repairs the day escrow closes, because the quicker you get the updating done, the faster you can sell the property and make some money. How to maximize sales profits One way to sell a house you are flipping is to sell it to an owner-occupant who will be happy to pay more for a nice home that is fully renovated and updated. To do that, you’ll probably have to list the home for sale on the local MLS and pay a real estate agent commission of 5% or 6% that will eat into the profits of the home you are flipping. A good way to maximize your potential profits when you flip a home is to rent the home to a tenant. You’ll need to conduct thorough tenant screening, and “season” the tenant by making sure the rent is paid on time for several months. After that, you can list the home on the Roofstock Marketplace and market the home you are flipping as a turnkey rental property to qualified real estate investors. Sales fees are about half of what you would pay compared to the MLS, and rental property investors may be willing to pay more for a home that is completely renovated and rented to good tenants. Although selling this way will lengthen the time between buying and flipping the house, you’ll have rental income from the tenant to offset the cost of holding the property for a few extra months. Be sure to crunch the numbers using both scenarios to see if the money you will save by selling on Roofstock is better than selling on the MLS. Where to get a loan to flip a house Conventional 30-year mortgages are the loan of choice for buy-and-hold real estate investors, but long-term loans aren’t really designed for investors who want to flip homes. Instead, you’ll need to be more creative about finding money to flip a house. Hard money Hard money lenders specialize in providing capital to real estate investors looking for a short-term loan to flip a home. Interest rates and fees are higher, with loan-to-value (LTV) ratios of 70% or less.
Although the payments are higher, house flippers move in and out of deals pretty quickly, so the actual carrying costs are low. Plus, many hard money lenders will allow you to include the cost of repairs in the hard money loan. Private lending Private money lenders are people who understand that not every real estate investment is suitable for a traditional mortgage. Private lenders raise money from passive real estate investors who are interested in investing in debt instead of equity. Fees and interest rates will be higher, but because private lenders are able to think outside of the box, they can be another good source for finding money to flip a house. Joint ventures Another good option for finding money to flip a house is to form your own joint venture under an LLC. Passive partners in the LLC contribute their capital, while you do the active work of finding and flipping a house. There are several advantages to having an LLC, such as limited liability and pass-through taxation. But the biggest benefit to using an LLC to flip a house is that you can share the potential profits any number of ways, depending on how the operating agreement is written and what the partners agree on. For example, even if your partners contribute most of the capital, you could be compensated with a greater share of the profits because you are the one doing all of the work. Top house flipping mistakes to avoid While flipping a house can be potentially very lucrative, there are some potential risks to be aware of as well. These are some of the common mistakes that beginning house flippers make: Lack of funds due to repair costs being higher than anticipated or overestimating the ARV of the home being flipped. Underestimating the time it takes to find, update, and flip a house, especially if you are trying to do everything on your own. Not having a qualified team of trusted, cost-effective contractors and handymen that can help your repair costs come in on budget.
Jumping at the first opportunity that comes along instead of taking the time to research and analyze the local real estate market to identify good houses to flip in middle-income and working-class neighborhoods. Final thoughts on flipping homes If you’re thinking about starting to flip houses, it’s best to play it safe and err on the side of caution. Start with a house that only needs cosmetic repairs, in a market that you know extremely well. Even if you only make a small profit on your first flip, you’ll be gaining valuable experience and a track record of success that will help to raise money from other investors to fund your house-flipping business. This article, and the Roofstock Blog in general, is intended for informational and educational purposes only, and is not investment, tax, financial planning, legal, or real estate advice. Roofstock is not your advisor or agent. Please consult your own experts for advice in these areas. Although Roofstock provides information it believes to be accurate, Roofstock makes no representations or warranties about the accuracy or completeness of the information contained on this blog. Jeff has over 25 years of experience in all segments of the real estate industry including investing, brokerage, residential, commercial, and property management. While his real estate business runs on autopilot, he writes articles to help other investors grow and manage their real estate portfolios. As a resource to investors, Roofstock may provide contact information or links to lending, insurance, property management, or other financial or professional service providers. In providing this information, Roofstock does not recommend or endorse any third-party provider nor guarantee their services. Roofstock may receive compensation or other financial benefits from service providers that market on this site, as authorized by law.
yes
Real Estate
Can One Make Quick Profit Flipping Houses?
yes_statement
one can make "quick" "profit" "flipping" "houses".. it is possible to make "quick" "profit" by "flipping" "houses".
https://www.artsy.net/article/artsy-editorial-flipping-art-controversial
Why “Flipping” Art Is so Controversial | Artsy
“Flipping” an asset loosely means buying and swiftly reselling it to make a quick profit. You can flip stocks, trading them within 24 hours. If you want to flip houses, you can buy them, fix them up, put them back on the market, and—if you’re lucky—get your own hit HGTV show. In the art world, however, flipping is a dirty word. “It’s disgusting,” said art adviser Lisa Schiff, noting how speculative buying practices can harm young artists’ careers. Speaking more subtly, Dr. Elizabeth Pergam, who teaches at Sotheby’s Institute of Art, noted that rapid run-ups in auction prices lead to a widely-held idea that flipping is “not healthy for the market.” Regardless of the data, art flipping carries a significant stigma in an industry that runs on relationships and reputations. Who flipping really hurts Most artworks, unlike stocks and houses, are made by working artists who rely on sales to support their creative practices. Artists often work with gallerists, who sell their artworks to institutions and collectors and give them a large portion of the proceeds. Such private exchanges are considered primary market sales. Once a collector decides to put an artwork up for auction, it enters the secondary market. Artists don’t benefit from any of these sales, except in the few jurisdictions with resale royalty laws; most of the money goes to the consignors and auction houses. Heather Bhandari, an independent curator and educator who previously worked as a director at the Chelsea gallery Mixed Greens (which closed in 2015), believes that the art world frowns upon flipping in part “because the buyer rarely does anything to help increase the value of the work—they profit from artists’ continued hard work without paying anything to the artists.” For Pergam, buying and selling a single work of art within 5 to 10 years is a rapid turnover, and constitutes flipping. Schiff prefers a you-know-it-when-you-see-it definition. “It’s always obvious,” she said. 
It happens when “this artist is too young to be at auction.” From Schiff’s perspective, a painting by Christina Quarles, who’s in her mid-thirties, doesn’t have any place sharing an auction catalog with canvases by Kerry James Marshall or Gerhard Richter. The latter artists are far more established, with roughly 30 and 50 additional years of artmaking to their names. They’ve had adequate time to develop their work, without facing the pressures of the art market. No one’s too concerned about flipping if the artist is well-established or dead—nobody is bothered on behalf of Jeff Koons, Takashi Murakami, or Jean-Michel Basquiat. But if an artist is just gaining traction in the art world, flipping can lead to spiking prices and ultimately destroy a career. The trajectories of artists including Lucien Smith, Anselm Reyle, and Christian Rosa illustrate the dangers of flipping. Smith’s work began appearing at auction in 2013, when he was in his early 20s. Artnet’s price database lists 116 results since then. Smith’s first canvas to go to auction, Hobbes, The Rain Man, and My Friend Barney / Under the Sycamore Tree (2011), sold for $389,000—well over twice its high estimate of $150,000. In recent years, however, Smith’s works have increasingly been “bought in” (not sold) or gone for prices in the $5,000–$20,000 range. In a 2015 article for Bloomberg, James Tarmy tracked a similar downfall for Reyle’s market. In 2007, before the artist turned 40, a work of his fetched $634,000 despite its high estimate of $51,000. “In one year, Reyle’s record at auction had increased by more than 1,000 percent,” Tarmy wrote. After the financial crisis hit in 2008, however, Reyle’s market toppled. That year, a third of his work, according to Tarmy, sold below its estimate or not at all. “The bottom line: A work by the once-hot artist Anselm Reyle sold last year for about $66,000, $30,000 less than it fetched four years ago,” Tarmy concluded. 
Unable to sustain his studio costs, Reyle had to temporarily retire from painting. What are flipping’s macro repercussions? While most agree that flipping hurts young artists, it’s less clear how the practice impacts the market as a whole. Doomsayers assert that today’s flipping is a new, ramped-up phenomenon—indicative of a vulgar, fad-obsessed class of collectors—which is bad news for the art market as a whole. Yet in an article for the New York Times, Lorne Manley and Robin Pogrebin found that the pace for turning over art in 2013 “was only slightly faster than it was in the mid-1990s, signaling that the reselling may be just the latest iteration of a historical cycle, not a lasting change.” New ways of tracking who’s hot and who’s not, however, are making flipping a more in-your-face phenomenon. ArtRank, which started as an art fund’s algorithm back in 2012, ranks artists by their investment potentials. Categories include “Buy Under $10,000,” “Buy Under $30,000,” “Sell/Peaking,” and “Undervalued Blue Chip.” Not a single image of an artwork adorns the site’s welcome page, suggesting that aesthetics, creativity, ingenuity, and self-expression—values that draw many people to art in the first place—have been totally superseded by quantitative concerns. A cure for flipping? Flipping isn’t good for galleries, which play a key role in nurturing artists’ careers. While they can put clauses in contracts, restricting collectors’ terms of resale, Pergam noted that “lawyers say those clauses are unenforceable.” Yet “if you buy art from respected galleries and they know you’ve sold it within 5, 10 years, they’re not going to sell you another one.” Auctions place no such restrictions on buyers. If you’re the highest bidder, you win the work—no matter your art world reputation. It’s not just young, greedy collectors, new to the market, who flip, either. According to the New York Times article, artist Peter Doig accused Charles Saatchi of flipping. 
Collector Stefan Simchowitz has even advocated the practice. Both men have amassed extremely influential collections. Galleries aren’t going to stop selling to them over infractions that make up a small piece of their larger portfolios. Schiff asserted that art advisers must be vigilant, too. She said speculative buyers—“fake people”—try to hire her and her peers “as beards for bad behavior.” The art world itself weeds out art advisers whose clients speculate: If her clients flipped, Schiff noted, “I wouldn’t have any clients or be an adviser. I would get cut off from every gallery.” Schiff added that she “loves the art world and artists” and has “no desire to destroy careers.” A client’s profit in the short term could diminish her livelihood, an artist’s, and that of the gallerist who sold the client the flipped work. According to artnet News, to guard against the practice David Zwirner has imposed financial penalties on salespeople who sell work that ends up on the auction block too quickly. Both Schiff and Bhandari believe the onus is on auction houses to curb flipping. Schiff doesn’t think auctions should be sourcing and selling art by artists who haven’t reached a certain stage in their careers. Bhandari, on the other hand, advocates resale royalties. “Regardless of when the secondary market sale happens, artists should share in the profit,” she said. “It is a way to bring more equity to the market. In other creative industries, royalties are a necessary income stream.” In the music industry, if someone wants to use your song for an advertisement, you often make money. Scholar Amy Whitaker believes that using blockchain to track an artwork could be a viable solution. For her part, Pergam thinks the media and celebrity culture have helped feed the booming interest in contemporary art, which has led to flipping. “A lot can be attributed to the success of Art Basel in Miami [Beach] and that culture that’s sort of about glitziness,” she said. 
If rappers and movie stars are snapping up major works and going to art parties, perhaps bad actors will be encouraged to get in on the art-buying action—and turn a quick profit to further their exploits. That genie, however, doesn’t seem to be going back in the bottle anytime soon. Yet it seems silly to point fingers at art-enthusiast celebrities, especially those who are bringing significant, celebratory attention to young artists and artists of color. Perhaps these celebrities could throw a little shade at flippers when they get the profile treatment, or publicly shame such get-rich-quick opportunists at parties. If these speculative buyers won’t listen to dealers at mid-tier galleries, maybe they’ll listen to Leonardo DiCaprio.
yes
Rheumatoid
Can Rheumatoid Arthritis be diagnosed with a blood test?
yes_statement
"rheumatoid" arthritis can be "diagnosed" with a "blood" "test".. a "blood" "test" can be used to "diagnose" "rheumatoid" arthritis.
https://www.healthline.com/health/rheumatoid-arthritis/rheumatoid-arthritis-test
6 Rheumatoid Arthritis Blood Tests, Plus Other Diagnostic Tools
An erythrocyte sedimentation rate (ESR) test evaluates how much inflammation is present in your body. The test measures how quickly your red blood cells, called erythrocytes, separate from your other blood cells in a lab when they are treated with a substance that prevents clotting. Red blood cells clump together when there’s inflammation in your body, making them separate from your other blood cells much faster. Low ESR levels indicate low levels of inflammation while high ESR results indicate high levels of inflammation. Doctors use this test to diagnose rheumatoid arthritis because this condition causes inflammation throughout your body. An ESR test on its own, however, is not enough to diagnose rheumatoid arthritis. Inflammation and a rise in ESR levels can be caused by other chronic conditions, and by infections or injuries. However, your ESR rate can help point doctors in the right direction. For example, very elevated ESR levels would likely indicate an infection and not rheumatoid arthritis. A C-reactive protein (CRP) test looks for the amount of CRP protein in your bloodstream. CRP is a protein produced by your liver. Your liver releases CRP when there’s an infection in your body. CRP helps start your immune system response to the infection. This leads to inflammation throughout your body. Autoimmune conditions, such as rheumatoid arthritis, can result in high levels of CRP in your bloodstream. A CRP test measures CRP and indicates the presence of inflammation. Similar to an ESR test, a CRP test can’t confirm rheumatoid arthritis on its own. However, it can give doctors a good idea of how much inflammation is present in your body and how active your immune system is. A full blood count, also known as a complete blood count (CBC), evaluates the cells that make up your blood. This includes your white blood cells, red blood cells, and platelets. 
When you’re healthy, your body can make, release, and regulate the amount of each type of blood cell you need for body functions. Rheumatoid arthritis doesn’t typically cause a disruption to your blood cells, but many conditions with similar symptoms do. A CBC with very abnormal results might indicate rheumatoid arthritis isn’t the right diagnosis. Rheumatoid factors are immune system proteins that sometimes attack the healthy tissue in your body. A rheumatoid factor test measures the level of rheumatoid factor proteins in your bloodstream. High levels of rheumatoid factors often point to rheumatoid arthritis, as well as Sjogren’s syndrome, and other autoimmune conditions. Results that show a high level can be helpful in confirming a rheumatoid arthritis diagnosis. However, people without autoimmune conditions sometimes have a high level of rheumatoid factor proteins, and not everyone with rheumatoid arthritis has a high level of rheumatoid factor proteins. Cyclic citrullinated peptide (CCP) antibodies are a type of immune system protein called an autoantibody. Autoantibodies are abnormal proteins that attack healthy blood cells and tissues. Between 60 and 80 percent of people with rheumatoid arthritis have CCP antibodies in their blood. An anti-CCP antibody test — also called an ACCP test or CCP-test — looks for the presence of these antibodies to help confirm rheumatoid arthritis. An anti-CCP test can also help doctors determine the severity of a rheumatoid arthritis case. High levels of CCP at diagnosis indicate an increased risk for the fast progression of joint damage. Doctors typically perform both a rheumatoid factor (RF) test and an anti-CCP test when evaluating a person they suspect may have rheumatoid arthritis. A positive result for either test indicates a higher risk for RA, and that risk is increased when both tests are positive. 
That said, both tests are negative in up to 50 percent of people with RA, and the tests remain negative during follow-up testing in 20 percent of those with RA. Antinuclear antibodies (ANA) are a type of autoantibody produced by your immune system. They act abnormally and attack healthy tissues and cells. The presence of ANAs can indicate an autoimmune condition. ANA testing looks for the presence of ANAs and can help confirm a rheumatoid arthritis diagnosis. Blood tests aren’t the only method that can be used to diagnose rheumatoid arthritis. You might also have a variety of other tests done to help confirm rheumatoid arthritis. These include: Physical assessment. A physical assessment can help determine how much your symptoms are impacting your daily life. You might be asked how well you can do daily tasks such as showering, eating, and dressing. A physical therapist might also assess your grip, walk, and balance. Joint scan. A joint scan looks for inflammation and damage in your joints. It can help confirm a rheumatoid arthritis diagnosis. Imaging tests. X-rays and MRIs create detailed pictures of your bones, muscles, and joints that can help diagnose rheumatoid arthritis. There’s no single test that can confirm rheumatoid arthritis. However, multiple blood tests can help indicate rheumatoid arthritis is the correct diagnosis. Blood tests look for the presence of inflammation and immune system proteins that often go along with rheumatoid arthritis. The results of these tests can be used along with imaging tests and an assessment of your symptoms to diagnose rheumatoid arthritis. Last medically reviewed on October 25, 2021
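The combined RF and anti-CCP interpretation described above (either test positive raises suspicion of RA, both positive raises it further, while up to 50 percent of people with RA are negative on both) can be sketched as a toy decision helper. This is an illustrative sketch only; the function name and risk labels are assumptions for demonstration, not clinical guidance:

```python
def interpret_ra_serology(rf_positive: bool, accp_positive: bool) -> str:
    """Toy interpretation of combined RF + anti-CCP results, per the text above.

    Note: per the article, up to 50 percent of people with RA test negative
    on both, so a seronegative result does not rule out RA.
    """
    if rf_positive and accp_positive:
        return "both positive: highest suspicion of RA"
    if rf_positive or accp_positive:
        return "one positive: elevated suspicion of RA"
    return "seronegative: RA not excluded (up to 50% of RA cases test negative)"

print(interpret_ra_serology(True, True))   # both positive: highest suspicion of RA
```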
A CBC with very abnormal results might indicate rheumatoid arthritis isn’t the right diagnosis. Rheumatoid factors are immune system proteins that sometimes attack the healthy tissue in your body. A rheumatoid factor test measures the level of rheumatoid factor proteins in your bloodstream. High levels of rheumatoid factors often point to rheumatoid arthritis, as well as Sjogren’s syndrome, and other autoimmune conditions. Results that show a high level can be helpful in confirming a rheumatoid arthritis diagnosis. However, people without autoimmune conditions sometimes have a high level of rheumatoid factor proteins, and not everyone with rheumatoid arthritis has a high level of rheumatoid factor proteins. Cyclic citrullinated peptide (CCP) antibodies are a type of immune system protein called an autoantibody. Autoantibodies are abnormal proteins that attack healthy blood cells and tissues. Between 60 and 80 percent of people with rheumatoid arthritis have CCP antibodies in their blood. An anti-CCP antibody test — also called an ACCP test or CCP-test — looks for the presence of these antibodies to help confirm rheumatoid arthritis. An anti-CCP test can also help doctors determine the severity of a rheumatoid arthritis case. High levels of CCP at diagnosis indicate an increased risk for the fast progression of joint damage. Doctors typically perform both a rheumatoid factor (RF) test and an anti-CCP test when evaluating a person they suspect may have rheumatoid arthritis. A positive result for either test indicates a higher risk for RA, and that risk is increased when both tests are positive.
yes
Rheumatoid
Can Rheumatoid Arthritis be diagnosed with a blood test?
yes_statement
"rheumatoid" arthritis can be "diagnosed" with a "blood" "test".. a "blood" "test" can be used to "diagnose" "rheumatoid" arthritis.
https://www.arthritis.org/diseases/more-about/testing-for-rheumatoid-arthritis
Testing for Rheumatoid Arthritis | Arthritis Foundation
Understand the lab and imaging tests used to diagnose and monitor disease activity in RA. By Mary Anne Dunkin | June 12, 2022 Diagnosing rheumatoid arthritis (RA) can take time. Like other forms of arthritis, a diagnosis is based largely on the findings from a medical exam and your symptoms. These may include joint pain, tenderness and swelling that affects the same joint or joints on both sides of your body (like both wrists or both knees); fatigue and fever. Lab tests and imaging tests can help your doctor make the diagnosis. Diagnostic Lab Tests Evidence of RA may be seen in the blood, so blood tests play an important role in making a diagnosis. Following are some of the tests your doctor may order. Erythrocyte sedimentation rate (ESR or sed rate). The ESR can gauge how much inflammation is in your body by measuring how quickly red blood cells (erythrocytes) separate from other cells in the blood and collect as sediment in the bottom of a test tube. Because inflammation can be caused by conditions other than RA, the results must be considered along with those of other tests when making an RA diagnosis. C-Reactive Protein (CRP).
This measures levels of CRP, a protein produced by the liver that signals inflammation. High CRP levels are common in RA and other inflammatory forms of arthritis. Because a high CRP may be present with many diseases and conditions, a high CRP in itself does not mean you have arthritis or identify which form you may have. The results must be interpreted in the context of your symptoms as well as the results of other tests. Rheumatoid factor (RF). Rheumatoid factor is a protein made by the immune system which may attack healthy tissues. High levels of rheumatoid factor could help your doctor make a diagnosis of RA. However, RF levels may also be high in other autoimmune diseases, so an RF test alone cannot be used to diagnose RA. Anti-CCP antibody test (ACCP or CCP). This test is for a type of autoantibody called cyclic citrullinated peptide (CCP) antibodies, which can be found in the blood of 60% to 80% of people with rheumatoid arthritis. The test is often conducted along with an RF test. Antinuclear antibody test (ANA). Antinuclear antibodies (ANA) are a type of autoantibody, a protein that attacks your body’s own tissues. The presence of ANAs can indicate an autoimmune condition, including RA. Diagnostic Imaging Tests Imaging tests, along with the physical exam and laboratory tests, can help identify RA. These imaging tests may be used to diagnose RA. X-ray. X-rays can show bone damage, characteristic of RA, where bones meet at joints. They are a common tool in diagnosis; however, because damage from inflammation develops over time and may not be visible via X-ray early on, X-ray may not be useful for diagnosing early RA. Magnetic resonance imaging (MRI). MRI is a procedure in which radio waves and a powerful magnet linked to a computer are used to create 3D images of structures inside the body. MRI can show changes in cartilage and bone that are indicative of RA. Ultrasound.
Ultrasound, or sonography, uses sound waves to create pictures of structures inside the body. This may be used to view changes in bones and cartilage suggestive of RA before any changes show up on X-ray. Other benefits of ultrasound include its relatively low cost and the fact that it doesn’t expose the body to radiation, as X-rays do. Computed tomography (CT) scan. A CT scan is an imaging procedure that combines a series of X-ray images to create cross-sectional images of parts of the body. Studies show CT scans may be effective for viewing early bone erosions that occur with RA. Monitoring Lab Tests Some of the same lab and imaging tests used in diagnosing RA are also used to monitor disease progression and response to treatment. Your doctor may order other tests to look for side effects of medications used to treat RA or effects of the disease itself. Your doctor may order some of these lab tests during your treatment. Erythrocyte sedimentation rate (ESR or sed rate). A reduced sed rate is an indication that inflammation is being controlled. C-Reactive Protein (CRP). As with sed rate, lower levels of CRP indicate that inflammation is being controlled. The MBDA test (Vectra DA). This blood test checks for 12 proteins, hormones and growth factors. It gives your doctor a single disease activity score that can indicate how aggressive your disease is, how likely you are to have a flare when stopping medications and what drug combinations may work best for you. Complete Blood Count (CBC). While the CBC won’t necessarily tell your doctor how active your disease is, components of the test can help if you have complications from RA or its treatment. For example, low red blood cell levels indicate anemia, which is common in people with RA. Low white blood cells, which are needed to fight infection, and low platelets, which are needed to make blood clot, can sometimes occur in people taking biologics. Liver enzyme (SGOT, SGPT, bilirubin, alkaline phosphatase).
Measuring levels of enzymes in the blood can help your doctor determine if you have liver damage, which may be related to RA treatment, an associated autoimmune condition or RA itself. Hematocrit (HCT) and hemoglobin (Hgb). These tests measure the number and quality of your red blood cells. Lower red blood cell counts may mean medications, such as NSAIDs or corticosteroids, are causing gastrointestinal bleeding. Lipid panel. Because some medications for RA, such as interleukin inhibitors and JAK inhibitors, may cause increases in your triglyceride and cholesterol levels, your doctor may check those levels during RA treatment and prescribe medication to lower lipid levels if necessary. Kidney function tests. Lab tests performed on your blood and urine can tell your doctor how well your kidneys are removing waste products from the body. Kidney damage may occur due to RA itself or medications used to treat it, including nonsteroidal anti-inflammatory drugs (NSAIDs), disease-modifying antirheumatic drugs (DMARDs), corticosteroids and biologics. Monitoring Imaging Tests A variety of imaging tests may be used to monitor joint damage resulting from inflammation. They may be the same as those used in diagnosing RA.
These may include joint pain, tenderness and swelling that affects the same joint or joints on both sides of your body (like both wrists or both knees); fatigue and fever. Lab tests and imaging tests can help your doctor make the diagnosis. Diagnostic Lab Tests Evidence of RA may be seen in the blood, so blood tests play an important role in making a diagnosis. Following are some of the tests your doctor may order. Erythrocyte sedimentation rate (ESR or sed rate). The ESR can gauge how much inflammation is in your body by measuring how quickly red blood cells (erythrocytes) separate from other cells in the blood and collect as sediment in the bottom of a test tube. Because inflammation can be caused by conditions other than RA, the results must be considered along with those of other tests when making an RA diagnosis. C-Reactive Protein (CRP). This measures levels of CRP, a protein produced by the liver that signals inflammation. High CRP levels are common in RA and other inflammatory forms of arthritis. Because a high CRP may be present with many diseases and conditions, a high CRP in itself does not mean you have arthritis or identify which form you may have. The results must be interpreted in the context of your symptoms as well as the results of other tests. Rheumatoid factor (RF). Rheumatoid factor is a protein made by the immune system which may attack healthy tissues. High levels of rheumatoid factor could help your doctor make a diagnosis of RA. However, RF levels may also be high in other autoimmune diseases, so an RF test alone cannot be used to diagnose RA. Anti-CCP antibody test (ACCP or CCP). This test is for a type of autoantibody called cyclic citrullinated peptide (CCP) antibodies, which can be found in the blood of 60% to 80% of people with rheumatoid arthritis. The test is often conducted along with an RF test.
yes
Rheumatoid
Can Rheumatoid Arthritis be diagnosed with a blood test?
yes_statement
"rheumatoid" arthritis can be "diagnosed" with a "blood" "test".. a "blood" "test" can be used to "diagnose" "rheumatoid" arthritis.
https://www.webmd.com/rheumatoid-arthritis/blood-tests
Blood Tests to Diagnose Arthritis
Blood Tests for RA and Other Autoimmune Conditions Blood Tests to Diagnose Arthritis Your doctor will use several blood tests to help diagnose you with rheumatoid arthritis (RA) and other inflammatory conditions. Blood tests are usually fast. The doctor sends you to a lab where a worker puts a needle into one of your veins. They take, or "draw," blood into several test tubes. The tests take a few days, and the doctor will call you to go over the results. Common blood tests for rheumatoid arthritis include: What it measures: Rheumatoid factor is a group of proteins your body makes when your immune system attacks healthy tissue. What’s normal: 0-20 u/mL (units per milliliter of blood) What’s high: 20 u/mL or higher What it means: About 70% to 90% of people with a high reading have RA. But people without RA can still have rheumatoid factor. In general, if you have RA but don’t have high RF, your disease will be less severe. RF levels may stay high even if you go into remission. What it measures: Proteins your body makes when there is inflammation. You’ll probably have it done along with the RF test. What’s normal: 20 u/mL or less What it means: This test offers a way to catch RA in its early stages. Levels are high in people who have RA or those who are about to get it. A positive test means there’s a 97% chance you have RA. If you have anti-CCP antibodies, your rheumatoid arthritis might be more severe. Other conditions you might have: None. This test is used only to look for RA.  Erythrocyte sedimentation rate (ESR) What it measures: The speed at which your red blood cells clump and fall together to the bottom of a glass tube within an hour. Your doctor might call it a sed rate. What’s normal: Men younger than 50: 0-15 mm/h (millimeters per hour) Men older than 50: 0-20 mm/h Women younger than 50: 0-20 mm/h Women older than 50: 0-30 mm/h What it means: In healthy people, the ESR is low. Inflammation makes cells heavier, so they fall faster. 
Higher levels tend to accompany active disease, though the correlation is not exact. Other conditions you might have: A high ESR doesn't point to any particular disease, but it's a general sign of how much inflammation is in your body. It could be tied to disease activity if you have: What it measures: A protein your liver makes when inflammation is present What’s normal: Generally, less than 10 milligrams per liter, but results vary from person to person and from lab to lab What it means: CRP levels often go up before you have symptoms, so this test helps doctors find the disease early. A high level suggests significant inflammation or injury in your body. Doctors also use this test after you’re diagnosed to monitor disease activity and to understand how well your treatment is working. What it measures: This series of tests measures the presence of certain unusual antibodies in your blood. What’s normal: These tests are reported as a titer, the highest dilution of your blood at which antibodies can still be detected. A value of 1:40 dilution (or 1 part serum to 40 parts solution) is negative. If the ANA is positive, you may have an autoimmune disorder, but the test alone can't make a reliable diagnosis. If the ANA is negative, it is likely that you don't have one. Other conditions you might have: The profile helps your doctor look for diseases such as: It is not an abnormal finding: 8%-10% of white people may have it, though most do not have a disease. What it means: HLA-B27 is a gene that’s linked to a group of conditions (you might hear it called a genetic marker) known as spondyloarthropathies. They involve joints and the places where ligaments and tendons attach to your bones. What it means: You might have an inflammatory muscle disease. Higher levels of CPK can also show up after trauma, injections into a muscle, muscle disease due to an underactive thyroid, and while taking certain medications such as cholesterol-lowering drugs called statins.
What it measures: More than 30 blood proteins that work together in your immune system during an inflammatory response. Complement proteins can get used up during this process. What’s normal: Serum CH50: 30-75 u/mL (units per milliliter) Serum C3: Men: 88-252 mg/dL (milligrams per deciliter) Women: 88-206 mg/dL Serum C4: Men: 12-72 mg/dL Women: 13-75 mg/dL What it means: Lower levels of all three components may signal lupus and vasculitis, or inflamed blood vessels. If you have lupus with kidney disease, your doctor may continue to give you this test because levels rise and fall along with disease activity. Testing for Other Autoimmune Conditions What’s normal: A negative result (no antibodies in your blood), or a titer of less than 1:20 What it means: You have a form of vasculitis, or inflamed blood vessels. You may get this test after you’re diagnosed, too. It helps your doctor see how your disease is progressing, though the link to disease activity isn’t perfect. Other conditions you might have: Granulomatosis with polyangiitis Microscopic polyangiitis Churg-Strauss syndrome
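The sex- and age-dependent ESR reference ranges quoted in this article (men under 50: 0-15 mm/h; men over 50: 0-20 mm/h; women under 50: 0-20 mm/h; women over 50: 0-30 mm/h) amount to a small lookup, which can be sketched as follows. The function names are illustrative assumptions, and the code is a minimal sketch of the quoted thresholds, not a clinical tool:

```python
def esr_upper_limit(age: int, sex: str) -> int:
    """Upper limit of normal ESR (mm/h), per the ranges quoted above.

    The article leaves age exactly 50 unspecified; this sketch applies
    the older-age range from 50 onward.
    """
    if sex == "male":
        return 15 if age < 50 else 20
    if sex == "female":
        return 20 if age < 50 else 30
    raise ValueError("sex must be 'male' or 'female'")

def esr_is_elevated(esr_mm_h: float, age: int, sex: str) -> bool:
    """True when the measured ESR exceeds the quoted normal range."""
    return esr_mm_h > esr_upper_limit(age, sex)

# Example: a 62-year-old woman with an ESR of 35 mm/h exceeds the 0-30 range.
print(esr_is_elevated(35, age=62, sex="female"))  # True
```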
Blood Tests for RA and Other Autoimmune Conditions Blood Tests to Diagnose Arthritis Your doctor will use several blood tests to help diagnose you with rheumatoid arthritis (RA) and other inflammatory conditions. Blood tests are usually fast. The doctor sends you to a lab where a worker puts a needle into one of your veins. They take, or "draw," blood into several test tubes. The tests take a few days, and the doctor will call you to go over the results. Common blood tests for rheumatoid arthritis include: What it measures: Rheumatoid factor is a group of proteins your body makes when your immune system attacks healthy tissue. What’s normal: 0-20 u/mL (units per milliliter of blood) What’s high: 20 u/mL or higher What it means: About 70% to 90% of people with a high reading have RA. But people without RA can still have rheumatoid factor. In general, if you have RA but don’t have high RF, your disease will be less severe. RF levels may stay high even if you go into remission. What it measures: Proteins your body makes when there is inflammation. You’ll probably have it done along with the RF test. What’s normal: 20 u/mL or less What it means: This test offers a way to catch RA in its early stages. Levels are high in people who have RA or those who are about to get it. A positive test means there’s a 97% chance you have RA. If you have anti-CCP antibodies, your rheumatoid arthritis might be more severe. Other conditions you might have: None. This test is used only to look for RA.  Erythrocyte sedimentation rate (ESR) What it measures: The speed at which your red blood cells clump and fall together to the bottom of a glass tube within an hour. Your doctor might call it a sed rate.
yes
Rheumatoid
Can Rheumatoid Arthritis be diagnosed with a blood test?
yes_statement
"rheumatoid" arthritis can be "diagnosed" with a "blood" "test".. a "blood" "test" can be used to "diagnose" "rheumatoid" arthritis.
https://www.arthritis.org/diseases/more-about/what-type-of-ra-do-you-have
Understanding Seronegative RA | Arthritis Foundation
Many people originally diagnosed with seronegative rheumatoid arthritis turn out to have a different type of joint disease. By Linda Rath | June 27, 2022 Some researchers believe rheumatoid arthritis (RA) isn’t a single disease but rather a collection of diseases. It might also be one disease with many different causes. However RA is eventually defined, there are two main subtypes in adults: seropositive and seronegative. In seropositive RA, blood tests show unusually high levels of antibodies called anti-cyclic citrullinated peptides (anti-CCPs). These are specific markers for RA and may show up as much as a decade before symptoms do. Around 60% to 80% of people diagnosed with RA have anti-CCPs. By definition, people with seronegative RA don’t have these antibodies in their blood, though that’s in some dispute. Doctors once used an antibody called rheumatoid factor (RF) to test for seropositivity. Most people with anti-CCPs also have RF, but so do people with lots of other conditions, including infections. That’s why anti-CCP is now the preferred test, though an RF test is often used in conjunction with it for greater accuracy.
The Role of Blood Tests in RA Diagnosis No single blood test can reliably diagnose RA. Some healthy people test positive for anti-CCPs, while others who have RA have negative test results. Blood tests are just one of several factors, including a medical history, physical exam and X-rays, that help doctors diagnose the disease. Still, antibodies are a pretty good indicator of RA if you also have joint pain and swelling and damage to bones and cartilage on imaging tests. Seronegative RA is more challenging and takes longer to diagnose because doctors try to rule out other types of arthritis that aren’t associated with high levels of anti-CCP, such as psoriatic arthritis, gout and spondyloarthritis. Still, seronegative RA remains an imprecise diagnosis, according to some experts. It’s rare for someone who is seronegative to become seropositive, but it’s not uncommon for a diagnosis of seronegative RA to be changed to something else later on. In one study of nearly 10,000 people diagnosed with seronegative RA, more than 500 were subsequently found to have spondyloarthritis, 275 had psoriatic arthritis and 245 had axial spondyloarthritis. Since these forms of arthritis mainly affect the low back and spine and RA affects the hands and feet, it seems the original diagnosis was based solely on the absence of anti-CCP. Another wrinkle: Some studies have found that about one-third of people diagnosed with seronegative RA actually have high levels of the same autoantibodies found in seropositive RA patients. Seropositive vs. Seronegative: Which is worse? In the debate about whether seropositive or seronegative patients have more severe disease, study results are mixed. A Dutch study found that people with seronegative disease had significantly more inflammation and disease activity than those with seropositive RA. And an international group of researchers reported a rare but particularly severe and destructive subtype of seronegative disease.
Another study, however, reported similar disease activity and progression in both types of RA after two years. A growing amount of research is devoted to seropositive and seronegative RA, but more is needed. For now, if you’re diagnosed with seronegative RA, ask why your doctor arrived at that diagnosis and consider getting a second opinion. If you have severe symptoms, talk to your doctor about using the same treat-to-target approach and medications prescribed for seropositive RA.
It might also be one disease with many different causes. However RA is eventually defined, there are two main subtypes in adults: seropositive and seronegative. In seropositive RA, blood tests show unusually high levels of antibodies called anti-cyclic citrullinated peptides (anti-CCPs). These are specific markers for RA and may show up as much as a decade before symptoms do. Around 60% to 80% of people diagnosed with RA have anti-CCPs. By definition, people with seronegative RA don’t have these antibodies in their blood, though that’s in some dispute. Doctors once used an antibody called rheumatoid factor (RF) to test for seropositivity. Most people with anti-CCPs also have RF, but so do people with lots of other conditions, including infections. That’s why anti-CCP is now the preferred test, though an RF test is often used in conjunction with it for greater accuracy. The Role of Blood Tests in RA Diagnosis No single blood test can reliably diagnose RA. Some healthy people test positive for anti-CCPs, while others who have RA have negative test results. Blood tests are just one of several factors, including a medical history, physical exam and X-rays, that help doctors diagnose the disease. Still, antibodies are a pretty good indicator of RA if you also have joint pain and swelling and damage to bones and cartilage on imaging tests. Seronegative RA is more challenging and takes longer to diagnose because doctors try to rule out other types of arthritis that aren’t associated with high levels of anti-CCP, such as psoriatic arthritis, gout and spondyloarthritis. Still, seronegative RA remains an imprecise diagnosis, according to some experts. It’s rare for someone who is seronegative to become seropositive, but it’s not uncommon for a diagnosis of seronegative RA to be changed to something else later on.
no
Rheumatoid
Can Rheumatoid Arthritis be diagnosed with a blood test?
yes_statement
"rheumatoid" arthritis can be "diagnosed" with a "blood" "test".. a "blood" "test" can be used to "diagnose" "rheumatoid" arthritis.
https://my.clevelandclinic.org/health/diseases/4924-rheumatoid-arthritis
Rheumatoid Arthritis (RA): Causes, Symptoms & Treatment FAQs
Rheumatoid Arthritis Rheumatoid arthritis is a type of arthritis where your immune system attacks the tissue lining the joints on both sides of your body. It may affect other parts of your body too. The exact cause is unknown. Treatment options include lifestyle changes, physical therapy, occupational therapy, nutritional therapy, medication and surgery. Overview Rheumatoid arthritis is an autoimmune disease that causes symptoms in several body systems. What is rheumatoid arthritis? Rheumatoid arthritis (RA) is an autoimmune disease that is chronic (ongoing). It occurs in the joints on both sides of your body, which makes it different from other types of arthritis. You may have symptoms of pain and inflammation in your: Fingers. Hands. Wrists. Knees. Ankles. Feet. Toes. Uncontrolled inflammation damages cartilage, which normally acts as a “shock absorber” in your joints. In time, this can deform your joints. Eventually, your bone itself erodes. This can lead to the fusion of your joint (an effort of your body to protect itself from constant irritation). Specific cells in your immune system (your body’s infection-fighting system) aid this process. These substances are produced in your joints but also circulate and cause symptoms throughout your body. In addition to affecting your joints, rheumatoid arthritis sometimes affects other parts of your body, including your: Skin. Eyes. Mouth. Lungs. Heart. Who gets rheumatoid arthritis? Rheumatoid arthritis affects more than 1.3 million people in the United States. It’s 2.5 times more common in people designated female at birth than in people designated male at birth. What’s the age of onset for rheumatoid arthritis? RA usually starts to develop between the ages of 30 and 60. But anyone can develop rheumatoid arthritis. In children and young adults — usually between the ages of 16 and 40 — it’s called young-onset rheumatoid arthritis (YORA). 
In people who develop symptoms after they turn 60, it’s called later-onset rheumatoid arthritis (LORA). Symptoms and Causes What are the symptoms of rheumatoid arthritis? Rheumatoid arthritis affects everyone differently. In some people, joint symptoms develop over several years. In other people, rheumatoid arthritis symptoms progress rapidly. Many people have time with symptoms (flares) and then time with no symptoms (remission). Does rheumatoid arthritis cause fatigue? Everyone’s experience of rheumatoid arthritis is a little different. But many people with RA say that fatigue is among the worst symptoms of the disease. Living with chronic pain can be exhausting. And fatigue can make it more difficult to manage your pain. It’s important to pay attention to your body and take breaks before you get too tired. What are rheumatoid arthritis flare symptoms? The symptoms of a rheumatoid arthritis flare aren’t much different from the symptoms of rheumatoid arthritis. But people with RA have ups and downs. A flare is a time when you have significant symptoms after feeling better for a while. With treatment, you’ll likely have periods of time when you feel better. Then, stress, changes in weather, certain foods or infections trigger a period of increased disease activity. Although you can’t prevent flares altogether, there are steps you can take to help you manage them. It might help to write your symptoms down every day in a journal, along with what’s going on in your life. Share this journal with your rheumatologist, who may help you identify triggers. Then you can work to manage those triggers. What causes rheumatoid arthritis? The exact cause of rheumatoid arthritis is unknown. Researchers think it’s caused by a combination of genetics, hormones and environmental factors. Normally, your immune system protects your body from disease. With rheumatoid arthritis, something triggers your immune system to attack your joints. 
An infection, smoking or physical or emotional stress may be triggering. Is rheumatoid arthritis genetic? Scientists have studied many genes as potential risk factors for RA. Certain genetic variations and non-genetic factors contribute to your risk of developing rheumatoid arthritis. Non-genetic factors include sex and exposure to irritants and pollutants. People born with variations in the human leukocyte antigen (HLA) genes are more likely to develop rheumatoid arthritis. HLA genes help your immune system tell the difference between proteins your body makes and proteins from invaders like viruses and bacteria. What are the risk factors for developing rheumatoid arthritis? There are several risk factors for developing rheumatoid arthritis. These include: Family history: You’re more likely to develop RA if you have a close relative who also has it. Sex: Women and people designated female at birth are two to three times more likely to develop rheumatoid arthritis. Smoking: Smoking increases a person’s risk of rheumatoid arthritis and makes the disease worse. Obesity: Your chances of developing RA are higher if you have obesity. Diagnosis and Tests How is rheumatoid arthritis diagnosed? Your healthcare provider may refer you to a physician who specializes in arthritis (rheumatologist). Rheumatologists diagnose people with rheumatoid arthritis based on a combination of several factors. They’ll do a physical exam and ask you about your medical history and symptoms. Your rheumatologist will order blood tests and imaging tests. The blood tests look for inflammation and blood proteins (antibodies) that are signs of rheumatoid arthritis. These may include: About 60% to 70% of people living with rheumatoid arthritis have antibodies to cyclic citrullinated peptides (CCP) (proteins). Your rheumatologist may order imaging tests to look for signs that your joints are wearing away. Rheumatoid arthritis can cause the ends of the bones within your joints to wear down. 
The imaging tests may include: In some cases, your provider may watch how you do over time before making a definitive diagnosis of rheumatoid arthritis. What are the diagnostic criteria for rheumatoid arthritis? Diagnostic criteria are a set of signs, symptoms and test results your provider looks for before telling you that you’ve got rheumatoid arthritis. They’re based on years of research and clinical practice. Some people with RA don’t have all the criteria. Generally, though, the diagnostic criteria for rheumatoid arthritis include: Inflammatory arthritis in two or more large joints (shoulders, elbows, hips, knees and ankles). Management and Treatment What are the goals of treating rheumatoid arthritis? The most important goal of treating rheumatoid arthritis is to reduce joint pain and swelling. Doing so should help maintain or improve joint function. The long-term goal of treatment is to slow or stop joint damage. Controlling joint inflammation reduces your pain and improves your quality of life. How is rheumatoid arthritis treated? Joint damage generally occurs within the first two years of diagnosis, so it’s important to see your provider if you notice symptoms. Treating rheumatoid arthritis in this “window of opportunity” can help prevent long-term consequences. Treatments for rheumatoid arthritis include lifestyle changes, therapies, medicine and surgery. Your provider considers your age, health, medical history and how bad your symptoms are when deciding on a treatment. What medications treat rheumatoid arthritis? Early treatment with certain drugs can improve your long-term outcome. Combinations of drugs may be more effective than, and appear to be as safe as, single-drug therapy. There are many medications to decrease joint pain, swelling and inflammation, and to prevent or slow down the disease. Medications that treat rheumatoid arthritis include: Non-steroidal anti-inflammatory drugs (NSAIDs) COX-2 inhibitors are another kind of NSAID. 
They include products like celecoxib (Celebrex®). COX-2 inhibitors have fewer bleeding side effects on your stomach than typical NSAIDs. Corticosteroids Corticosteroids, also known as steroids, also can help with pain and inflammation. They include prednisone and cortisone. Disease-modifying antirheumatic drugs (DMARDs) Unlike NSAIDs, DMARDs actually can slow the disease process by modifying your immune system. Your provider may prescribe DMARDs alone and in combination with steroids or other drugs. Common DMARDs include: Methotrexate (Trexall®). Hydroxychloroquine (Plaquenil®). Sulfasalazine (Azulfidine®). Leflunomide (Arava®). Janus kinase (JAK) inhibitors JAK inhibitors are another type of DMARD. Rheumatologists often prescribe JAK inhibitors for people who don’t improve taking methotrexate alone. These products include: Biologics If you don’t respond well to DMARDs, your provider may prescribe biologic response agents (biologics). Biologics target the molecules that cause inflammation in your joints. Providers think biologics are more effective because they attack the cells at a more specific level. These products include: Biologics tend to work rapidly — within two to six weeks. Your provider may prescribe them alone or in combination with a DMARD like methotrexate. What is the safest drug for rheumatoid arthritis? The safest drug for rheumatoid arthritis is one that gives you the most benefit with the least amount of negative side effects. This varies depending on your health history and the severity of your RA symptoms. Your healthcare provider will work with you to develop a treatment program. The drugs your healthcare provider prescribes will match the seriousness of your condition. It’s important to meet with your healthcare provider regularly. They’ll watch for any side effects and change your treatment, if necessary. Your healthcare provider may order tests to determine how effective your treatment is and if you have any side effects. 
Will changing my diet help my rheumatoid arthritis? When combined with the treatments and medications your provider recommends, changes in diet may help reduce inflammation and other symptoms of RA. But it won’t cure you. You can talk with your doctor about adding good fats and minimizing bad fats, salt and processed carbohydrates. No herbal or nutritional supplements, like collagen, can cure rheumatoid arthritis. These dietary changes are safer and most successful when monitored by your rheumatologist. But there are lifestyle changes you can make that may help relieve your symptoms. Your rheumatologist may recommend weight loss to reduce stress on inflamed joints. People with rheumatoid arthritis also have a higher risk of coronary artery disease. High blood cholesterol (a risk factor for coronary artery disease) can respond to changes in diet. A nutritionist can recommend specific foods to eat or avoid to reach a desirable cholesterol level. When is surgery used to treat rheumatoid arthritis? Surgery may be an option to restore function to severely damaged joints. Your provider may also recommend surgery if your pain isn’t controlled with medication. Surgeries that treat RA include: Outlook / Prognosis What is the prognosis (outlook) for people who have rheumatoid arthritis? Although there’s no cure for rheumatoid arthritis, there are many effective methods for decreasing your pain and inflammation and slowing down your disease process. Early diagnosis and effective treatment are very important. What types of lifestyle changes can help with rheumatoid arthritis? Having a lifelong illness like rheumatoid arthritis may make you feel like you don’t have much control over your quality of life. While there are aspects of RA that you can’t control, there are things you can do to help you feel the best that you can. 
Such lifestyle changes include: Rest When your joints are inflamed, the risk of injury to your joints and nearby soft tissue structures (such as tendons and ligaments) is high. This is why you need to rest your inflamed joints. But it’s still important for you to exercise. Maintaining a good range of motion in your joints and good fitness overall are important in coping with RA. Exercise Pain and stiffness can slow you down. Some people with rheumatoid arthritis become inactive. But inactivity can lead to a loss of joint motion and loss of muscle strength. These, in turn, decrease joint stability and increase pain and fatigue. Regular exercise can help prevent and reverse these effects. You might want to start by seeing a physical or occupational therapist for advice about how to exercise safely. Beneficial workouts include: Range-of-motion exercises to preserve and restore joint motion. Exercises to increase strength. Exercises to increase endurance (walking, swimming and cycling). Frequently Asked Questions What are the early signs of rheumatoid arthritis? Early signs of rheumatoid arthritis include tenderness or pain in small joints like those in your fingers or toes. Or you might notice pain in a larger joint like your knee or shoulder. These early signs of RA are like an alarm clock set to vibrate. It might not always be enough to get your attention. But the early signs are important because the sooner you’re diagnosed with RA, the sooner your treatment can begin. And prompt treatment may mean you are less likely to have permanent, painful joint damage. What is early stage rheumatoid arthritis? Providers sometimes use the term “early rheumatoid arthritis” to describe the condition in people who’ve had symptoms of rheumatoid arthritis for fewer than six months. What are the four stages of rheumatoid arthritis? Stage 1: In early stage rheumatoid arthritis, the tissue around your joint(s) is inflamed. You may have some pain and stiffness. 
If your provider ordered X-rays, they wouldn’t see destructive changes in your bones. Stage 2: The inflammation has begun to damage the cartilage in your joints. You might notice stiffness and a decreased range of motion. Stage 3: The inflammation is so severe that it damages your bones. You’ll have more pain, stiffness and even less range of motion than in stage 2, and you may start to see physical changes. What’s the normal sed rate for rheumatoid arthritis? Sed rate (erythrocyte sedimentation rate, also known as ESR) is a blood test that helps detect inflammation in your body. Your healthcare provider may also use this test to watch how your RA progresses. Normal sed rates are as follows: for people designated male at birth, ≤ 15 mm/hr under age 50 and ≤ 20 mm/hr over age 50; for people designated female at birth, ≤ 20 mm/hr under age 50 and ≤ 30 mm/hr over age 50. In rheumatoid arthritis, your sed rate is likely higher than normal. To take part in clinical trials related to rheumatoid arthritis, you usually need an ESR of ≥ 28 mm/hr. With treatment, your sed rate may decrease. If you reach the normal ranges listed above, you may be in remission. What is the difference? Rheumatoid arthritis vs. osteoarthritis Rheumatoid arthritis and osteoarthritis are both common causes of pain and stiffness in joints. But they have different causes. In osteoarthritis, inflammation and injury break down your cartilage over time. In rheumatoid arthritis, your immune system attacks the lining of your joints. Is rheumatoid arthritis a disability? The Americans with Disabilities Act (ADA) says that a disability is a physical or mental impairment that limits one or more major life activity. If RA impacts your ability to function, you may qualify for disability benefits from the Social Security Administration. Can rheumatoid arthritis go away? No, rheumatoid arthritis doesn’t go away. It’s a condition you’ll have for the rest of your life. 
But you may have periods where you don’t notice symptoms. These times of feeling better (remission) may come and go. That said, the damage RA causes in your joints is here to stay. If you don’t see a provider for RA treatment, the disease can cause permanent damage to your cartilage and, eventually, your joints. RA can also harm organs like your lungs and heart. A note from Cleveland Clinic If you have rheumatoid arthritis, you may feel like you’re on a lifelong roller coaster of pain and fatigue. It’s important to share these feelings and your symptoms with your healthcare provider. Along with X-rays and blood tests, what you say about your quality of life will help inform your treatment. Your healthcare provider will assess your symptoms and recommend the right treatment plan for your needs. Most people can manage rheumatoid arthritis and still do the activities they care about.
’re more likely to develop RA if you have a close relative who also has it. Sex: Women and people designated female at birth are two to three times more likely to develop rheumatoid arthritis. Smoking: Smoking increases a person’s risk of rheumatoid arthritis and makes the disease worse. Obesity: Your chances of developing RA are higher if you have obesity. Diagnosis and Tests How is rheumatoid arthritis diagnosed? Your healthcare provider may refer you to a physician who specializes in arthritis (rheumatologist). Rheumatologists diagnose people with rheumatoid arthritis based on a combination of several factors. They’ll do a physical exam and ask you about your medical history and symptoms. Your rheumatologist will order blood tests and imaging tests. The blood tests look for inflammation and blood proteins (antibodies) that are signs of rheumatoid arthritis. These may include: About 60% to 70% of people living with rheumatoid arthritis have antibodies to cyclic citrullinated peptides (CCP) (proteins). Your rheumatologist may order imaging tests to look for signs that your joints are wearing away. Rheumatoid arthritis can cause the ends of the bones within your joints to wear down. The imaging tests may include: In some cases, your provider may watch how you do over time before making a definitive diagnosis of rheumatoid arthritis. What are the diagnostic criteria for rheumatoid arthritis? Diagnostic criteria are a set of signs, symptoms and test results your provider looks for before telling you that you’ve got rheumatoid arthritis. They’re based on years of research and clinical practice. Some people with RA don’t have all the criteria.
yes
Rheumatoid
Can Rheumatoid Arthritis be diagnosed with a blood test?
yes_statement
"rheumatoid" arthritis can be "diagnosed" with a "blood" "test".. a "blood" "test" can be used to "diagnose" "rheumatoid" arthritis.
https://www.everydayhealth.com/arthritis/psoriatic-arthritis/12-medical-tests-psoriatic-arthritis/
12 Medical Tests for Psoriatic Arthritis, Explained
X-rays can detect problems like bone erosion, a sign that psoriatic arthritis is getting worse. Few people like being poked and prodded, but if you’re living with psoriatic arthritis, regular medical testing is vital to keeping healthy. Psoriatic arthritis tests not only help your doctor diagnose the autoimmune condition but are also important for monitoring the disease’s progression, as well as managing psoriatic arthritis symptoms like painful joint inflammation. Diagnosing psoriatic arthritis can be challenging because there’s no single test, says Magdalena Cadet, MD, a clinical rheumatologist and adjunct assistant professor in the department of medicine at New York University's Grossman School of Medicine in New York City. Along with a physical exam, you’ll likely need a series of both imaging procedures and blood tests in order to make a psoriatic arthritis diagnosis, as well as rule out other forms of arthritis, such as rheumatoid arthritis or gout. “Many types of arthritis are associated with inflammation, and it’s important to distinguish between them in order to initiate a treatment plan,” Dr. Cadet explains. Here are 12 key medical tests that can help detect and monitor psoriatic arthritis. 1. Psoriatic Arthritis Imaging Test: X-Ray X-rays, which use low-dose radiation to produce images of the inside of the body, can help your doctor make a psoriatic arthritis diagnosis and monitor progression of the autoimmune condition. “X-rays allow the doctor to see changes to the bone,” says Elyse Rubenstein, MD, a rheumatologist in Santa Monica, California. In people with psoriatic arthritis, X-rays may show bone erosion, new bone formation, bone fusion, or a phenomenon called “pencil in cup,” in which the ends of the bone have been eroded to a pencillike point. Any of these changes indicate that the disease is getting worse, Dr. Rubenstein says. 
Frequency of Testing A doctor may take an initial X-ray to help diagnose psoriatic arthritis and rule out other forms of arthritis (such as rheumatoid arthritis) that have different patterns of joint involvement, says Rubenstein. After that, how often you have X-rays depends on your physician and the state of your disease. Some doctors take X-rays just once a year for routine monitoring, while others may take them only when a patient’s condition changes. Early treatment is key to easing discomfort and heading off complications. 2. Psoriatic Arthritis Imaging Test: MRI “If the X-rays don’t show inflammation, and the doctor wants more evidence, they may do an MRI,” Rubenstein says. That’s because MRIs are more detailed than X-rays. This noninvasive imaging technique uses a magnetic field and computer-generated radio waves to create detailed three-dimensional images. During an MRI, you lie inside a machine (typically a large tube-shaped magnet) and remain very still while the device moves a strong magnetic field, then radio waves, through your body to excite protons (subatomic particles) found in the water that makes up human tissue, according to the National Institute of Biomedical Imaging and Bioengineering. The procedure is painless and, unlike X-ray imaging, does not emit radiation. A radiologist analyzes the MRI, then reports back to the rheumatologist. Inflammation, swelling, and bone erosion all indicate that psoriatic arthritis is active, notes Rubenstein. Frequency of Testing A doctor may order an MRI during initial testing to help with making a psoriatic arthritis diagnosis, as well as later to monitor the disease or look for any changes in a patient’s psoriatic arthritis symptoms. 3. 
Psoriatic Arthritis Blood Test: Erythrocyte Sedimentation Rate Erythrocyte sedimentation rate, or ESR or sed rate, is a blood test that measures inflammation in the body, which helps determine a psoriatic arthritis diagnosis, explains Elaine Husni, MD, MPH, vice chair of rheumatology and director of the Arthritis and Musculoskeletal Center at the Cleveland Clinic. The test measures how many milliliters of red blood cells settle per hour in a vial of blood. When swelling and inflammation are present, the blood’s proteins clump together and become heavier; as a result, they will fall and settle faster at the bottom of the test tube, according to Johns Hopkins Medicine. As with many blood tests, labs each have their own, slightly different reading of what ESR numbers mean, which they interpret based on past results, explains Cadet. Age is also a factor. “ESR can be elevated slightly in elderly patients and still be normal for that person,” she says. Frequency of Testing In addition to diagnosis, testing may be done several times a year to determine if there’s ongoing inflammation, says Cadet. 4. Psoriatic Arthritis Blood Test: C-Reactive Protein C-reactive protein (CRP) is a protein in the blood that indicates inflammation. If a blood test shows high CRP levels, you might have psoriatic arthritis, explains Dr. Husni. “Your doctor may use the test if your ESR is normal, since CRP is more accurate at detecting inflammation in some people,” adds Cadet. Again, different labs may have slightly different interpretations of readings. Frequency of Testing CRP analysis may be done for diagnosis and then several times a year to assess whether inflammation has responded to treatment, notes Cadet. 5. 
Psoriatic Arthritis Blood Test: Rheumatoid Factor Rheumatoid factor (RF), a protein produced by the immune system that can be a marker for autoimmune dysfunction, is sometimes an indication of systemic inflammation. Although RF is mostly associated with rheumatoid arthritis, it can also occur in a small percentage of people with psoriatic arthritis, says Rubenstein. To distinguish the two conditions, doctors will look at RF levels in the context of other factors, such as a certain pattern of joint involvement and symptoms of psoriasis, which can accompany psoriatic arthritis. Frequency of Testing This is usually done only at the initial diagnostic appointment, says Rubenstein. 6. Psoriatic Arthritis Blood Test: Anti-CCP Blood tests that look for the presence of anti-cyclic citrullinated peptide antibodies (anti-CCPs), which are inflammatory, are commonly used to diagnose rheumatoid arthritis, but anti-CCPs can also indicate psoriatic arthritis. Roughly 8 to 16 percent of people with psoriatic arthritis will test positive for anti-CCPs, says Rubenstein. Frequency of Testing The anti-CCP test is typically done in a patient’s initial evaluation. 7. Psoriatic Arthritis Blood Test: HLA-B27 HLA-B27 is a blood test that looks for a genetic marker for psoriatic arthritis — a protein called human leukocyte antigen B27 (HLA-B27), which is located on the surface of white blood cells. About 20 percent of people with psoriatic arthritis are positive for HLA-B27, according to CreakyJoints, although other studies have shown the percentage could be twice that. HLA-B27 is associated with a larger group of autoimmune diseases, called spondyloarthropathies, which includes psoriatic arthritis, Cadet says. These conditions can cause inflammation in the enthesis (the area where bone and tendons meet) anywhere in the body, but mainly in the spine. 
If untreated over a long period, this inflammation may cause the destruction of cartilage, muscle spasms, and a decrease in bone mineral density that may lead to osteopenia or osteoporosis. Frequency of Testing “The HLA-B27 test is usually performed only at an initial visit to help establish a diagnosis,” says Cadet. 8. Psoriatic Arthritis Skin and Blood Tests: Tuberculosis Test Tuberculosis (TB) is a bacterial infection that typically affects the lungs but can also reach bones, joints, and kidneys. Symptoms include fever, night sweats, chills, coughing, weight loss, and fatigue. People with psoriatic arthritis must have a negative TB test before they can take biologic medications, which are protein-based drugs given by injection or infusion. By suppressing the immune system, these medications may reactivate latent (inactive) tuberculosis. There are two kinds of TB tests: a skin test and a blood test. The skin test involves injecting a small amount of a protein called tuberculin into the skin of the lower arm, then checking the area around 48 to 72 hours later to see if there has been a reaction. The result depends on the size of the raised, hard area or swelling, according to the Centers for Disease Control and Prevention (CDC). A TB blood test assesses whether the body has launched an immune response to the presence of M. tuberculosis bacteria. The test is done in a lab after a blood sample is drawn. Frequency of Testing Doctors order a TB test before prescribing biologics and may repeat testing annually as long as a patient is taking the medication, says Cadet. She adds, “Any patient who exhibits symptoms or has been exposed to TB should have an immediate TB test.” 9. Psoriatic Arthritis Imaging Test: Chest X-Ray Doctors often order a chest X-ray in conjunction with a TB test to increase the chance of detecting infection, says Cadet. 
“The X-ray may show scarring from prior exposure to TB, or if there’s an active or new infection,” she explains. Frequency of Testing As with the TB skin test, doctors may order a chest X-ray prior to prescribing biologics, repeating the test annually as long as the patient is taking the medication, says Cadet. 10. Psoriatic Arthritis Blood Test: Serum Uric Acid Uric acid is a substance that forms when the body breaks down purines, which are found in human cells and many foods, according to the Arthritis Foundation. Elevated blood levels of uric acid are sometimes identified in people with psoriatic arthritis and can also be linked to gout, heart disease, and high blood pressure, according to Cadet. Frequency of Testing Testing may be done several times a year, says Cadet. 11. Psoriatic Arthritis Imaging Test: Bone Mineral Density The most common bone mineral density test is called DXA (also abbreviated DEXA), for dual-energy X-ray absorptiometry. This test uses X-rays to measure how many grams of calcium and other bone minerals are packed into a segment of bone. The denser the bones, the stronger and healthier they are. Unfortunately, common psoriatic arthritis medications — such as prednisone, a corticosteroid — can weaken bones over time and increase the risk of osteoporosis. “And psoriatic arthritis itself is associated with a decrease in bone mineral density,” notes Rubenstein. If you’re diagnosed with osteopenia, a condition involving weakened bones that may lead to osteoporosis, your doctor will discuss medications that can slow or stop bone loss, and may recommend calcium and vitamin D supplements along with resistance exercise, says Rubenstein. Frequency of Testing “Bone density screening is done during menopause and every one to two years after that,” says Rubenstein. “If a patient is on prednisone or other medications that decrease bone mineral density, the test may be done earlier and repeated every one to two years.” 12. 
Psoriatic Arthritis Blood Test: Anemia When you have psoriatic arthritis, ongoing inflammation may cause anemia, a decrease in healthy red blood cells that can lead to dizziness, shortness of breath, and exhaustion, says Cadet. By measuring your blood levels of hemoglobin (the pigmented, oxygen-carrying component of red blood cells), your doctor can determine if you have anemia. A normal reading for women is 11.6 to 15 grams of hemoglobin per deciliter of blood, and 13.2 to 16.6 grams is normal for men, according to the Mayo Clinic. If blood work reveals anemia, your doctor will give you an exam and other blood tests to find the cause. In people with psoriatic arthritis, treatments that reduce inflammation also help with anemia, explains Cadet. Frequency of Testing Doctors may order tests to be done several times a year to see if the anemia has worsened or improved.
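As a simple illustration of how the hemoglobin reference ranges quoted above could be applied, here is a minimal sketch in Python. The cut-offs are the Mayo Clinic ranges stated in the text; interpreting an actual result is always a job for a clinician, and this check is illustrative only.

```python
# Normal hemoglobin ranges in g/dL, as quoted in the text (per the Mayo Clinic).
NORMAL_HEMOGLOBIN = {
    "female": (11.6, 15.0),
    "male": (13.2, 16.6),
}

def below_normal_hemoglobin(hgb_g_dl: float, sex: str) -> bool:
    """Return True if a hemoglobin reading falls below the quoted
    normal range, which is one possible sign of anemia."""
    low, _high = NORMAL_HEMOGLOBIN[sex]
    return hgb_g_dl < low

print(below_normal_hemoglobin(10.9, "female"))  # True
print(below_normal_hemoglobin(14.1, "male"))    # False
```

A reading below the lower bound would simply prompt the follow-up exam and blood tests the article describes, not a diagnosis on its own.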
3. Psoriatic Arthritis Blood Test: Erythrocyte Sedimentation Rate (ESR) The test measures how far, in millimeters, red blood cells settle in one hour in a vial of blood. When swelling and inflammation are present, the blood’s proteins clump together and become heavier; as a result, they will fall and settle faster at the bottom of the test tube, according to Johns Hopkins Medicine. As with many blood tests, labs each have their own, slightly different reading of what ESR numbers mean, which they interpret based on past results, explains Cadet. Age is also a factor. “ESR can be elevated slightly in elderly patients and still be normal for that person,” she says. Frequency of Testing In addition to diagnosis, testing may be done several times a year to determine if there’s ongoing inflammation, says Cadet. 4. Psoriatic Arthritis Blood Test: C-Reactive Protein C-reactive protein (CRP) is a protein in the blood that indicates inflammation. If a blood test shows high CRP levels, you might have psoriatic arthritis, explains Dr. Husni. “Your doctor may use the test if your ESR is normal, since CRP is more accurate at detecting inflammation in some people,” adds Cadet. Again, different labs may have slightly different interpretations of readings. Frequency of Testing CRP analysis may be done for diagnosis and then several times a year to assess whether inflammation has responded to treatment, notes Cadet. 5. Psoriatic Arthritis Blood Test: Rheumatoid Factor Rheumatoid factor (RF), a protein produced by the immune system that can be a marker for autoimmune dysfunction, is sometimes an indication of systemic inflammation. Although RF is mostly associated with rheumatoid arthritis, it can also occur in a small percentage of people with psoriatic arthritis, says Rubenstein.
https://creakyjoints.org/living-with-arthritis/symptoms/what-is-seronegative-arthritis/
Seronegative Rheumatoid Arthritis: What It Is and How to Treat It
What Exactly Is Seronegative Rheumatoid Arthritis? Key Facts You Need to Know PUBLISHED 11/15/18 BY Marissa Laliberte When diagnosing and treating RA, blood tests aren’t everything. There are two main types of rheumatoid arthritis (RA) in adults: seropositive and seronegative. Both have the same symptoms — joint pain, morning stiffness, fatigue, fever, low appetite — but the primary difference is in the bloodwork. In most people diagnosed with RA, blood tests reveal abnormally high levels of antibodies called rheumatoid factor (RF) and anti-cyclic citrullinated peptide (anti-CCP), which signal that the immune system is in overdrive and may be attacking healthy tissues instead of just foreign invaders like germs. The majority of rheumatoid arthritis patients are seropositive: 50 percent to 70 percent of RA patients have anti-CCP antibodies and 65 percent to 80 percent have rheumatoid factor antibodies, research shows. However, this means that a sizeable number of people with RA are considered to be seronegative, which means they don’t have either of these antibodies in their blood. Keep in mind that blood tests are just one part of the process that doctors use to diagnose RA. How Are Blood Tests Used to Help Diagnose RA? RF and anti-CCP tests don’t definitively point to RA because some healthy people without RA test positive for these antibodies, while other people who do have autoimmune problems test negative, says Umbreen Hasan, MD, consultant rheumatologist for Allina Health in Minnesota. That’s why doctors will also consider RA symptoms, inflammation levels, and the amount of joint swelling with the help of X-rays and ultrasounds. “Although blood tests for inflammatory arthritis can help in the diagnosis of the condition, a good history and physical examination is more important,” says Dr. Hasan.
“The diagnosis [of RA] should not be solely based on blood tests.” However, if you have symptoms that are consistent with rheumatoid arthritis and you do test positive for these antibodies, your doctor will feel pretty confident being able to diagnose you with RA. How Do Doctors Diagnose Seronegative RA? People who don’t test positive for the presence of RF and anti-CCP can still be diagnosed with rheumatoid arthritis based on their symptoms, a physical exam of their joints, and imaging tests (X-rays and ultrasounds) that can show patterns of cartilage and bone deterioration. Interestingly, some people who initially test seronegative develop those RF and anti-CCP antibodies later. Among people with more established RA, the percentage of seropositive patients rises to 80 to 85 percent, says Konstantinos Loupasakis, MD, rheumatologist with MedStar Washington Hospital Center. However, most people with seronegative RA never develop these antibodies or become seropositive. Because doctors feel less confident diagnosing RA without positive blood tests, they’ll need to rule out other conditions like viral infections, gout, or spondyloarthritis (an umbrella term for conditions such as psoriatic arthritis and reactive arthritis that isn’t associated with high levels of RF and anti-CCP), says Dr. Loupasakis. “We want to be very careful that we are not missing something,” says Dr. Loupasakis. “There are diseases that can camouflage as rheumatoid arthritis, and [your symptoms] might be something else.” This may help explain why research shows that people with seronegative RA often take longer to get diagnosed and to start treatment than people with seropositive RA, according to a study presented at the 2017 annual meeting of the American College of Rheumatology. Is Seronegative RA Just Another Kind of Arthritis? A seronegative test doesn’t automatically point to spondyloarthritis, which is a separate condition, he says.
The two types of inflammatory arthritis affect the joints differently, confirms Dr. Hasan. While rheumatoid arthritis generally hits small joints like the hands and feet, spondyloarthritis is more likely to start in the lower back or shoulders. Getting the wrong diagnosis can keep patients from the best treatment. While spondyloarthritis has its own approved set of treatments, seropositive and seronegative RA are treated the same way. Both use disease-modifying anti-rheumatic drugs (DMARDs), biologics, corticosteroids, and anti-inflammatory NSAID painkillers like aspirin. The primary difference is that rituximab, an infused medication, is only effective for seropositive patients, though it’s not among the first treatments a doctor will prescribe anyway, says Dr. Loupasakis. Seronegative vs. Seropositive RA: Are There Other Differences? Past studies seemed to indicate that seropositive RA patients had a worse prognosis and more severe disease progression than seronegative RA patients, according to MedPage Today. This has created a certain stigma around seronegative RA — that it is a “less severe disease” and perhaps even requires less aggressive treatment. However, the thinking here is changing based on newer research. For example, a Dutch study found that seronegative RA patients had significantly greater disease activity and worse functional ability than seropositive patients; on the other hand, seropositive patients had greater joint damage. A Canadian study found that measures of RA disease activity (such as number of swollen/tender joints or X-ray evidence of joint damage) were higher in seronegative patients than in seropositive patients when the study began. Both seronegative and seropositive patients received similar treatment. When measured again after two years, the seronegative RA patients had a significantly greater improvement in several measures of disease activity and less erosion than those with seropositive disease.
Part of the problem may be the delay in diagnosis. Because people with seronegative RA take longer to get diagnosed and start disease-modifying medication, they may be missing a crucial window to prevent progression and enter remission. Understanding the differences between seropositive and seronegative patients, as well as nuances within each of those groups, is an ongoing area of study. Both seronegative and seropositive RA likely have different subtypes that haven’t yet been teased out. Personalizing treatment and being able to better predict which patients will do better on which kinds of treatment is a hot topic in the field of rheumatology. Bottom line, according to MedPage: “RA patients classified as seronegative may indeed experience a level of disease activity that is as severe, or more severe, than patients who are seropositive, and thus may benefit from the type of aggressive treatment strategies that are more routinely used to treat seropositive patients.” When People Say Seronegative RA ‘Isn’t Real’ Kate Mitchell of Boston knows all too well the importance of getting the right diagnosis. Her rheumatologist first thought she had psoriatic arthritis because of a family history of psoriasis. Realizing she’d only ever had two psoriasis flare-ups, her doctor suggested she might have seronegative rheumatoid arthritis instead and started her on RA medications. Those medications didn’t work as well as they’d hoped, so he put her back on medication to treat psoriatic arthritis, but her symptoms got even worse, and she developed endometriosis. She finally found relief when she went back to RA medications. “I’m not spending my entire life in bed or on the couch,” she says. “I can leave my house for things other than a rheumatology appointment.” Mitchell had a good experience with her rheumatologist but says she’s run into other doctors who have tried convincing her seronegative RA isn’t real or that she has a different type of arthritis.
She tries to remind herself that she knows her own body, but “other times it’s upsetting and demoralizing,” she says. Mitchell encourages patients to keep up with doctors’ appointments to find a diagnosis — whether it’s seronegative RA or something else. “There are so many illnesses and forms of arthritis that do not have a definite test to diagnose them,” she says. “Just because one doctor or one rheumatologist says you do not have x form of arthritis doesn’t mean you don’t have any of the other 100 forms.” About CreakyJoints CreakyJoints is a digital community for millions of arthritis patients and caregivers worldwide who seek education, support, advocacy, and patient-centered research. We represent patients through our popular social media channels, our website CreakyJoints.org, and the 50-State Network, which includes nearly 1,500 trained volunteer patient, caregiver and healthcare activists.
https://www.versusarthritis.org/about-arthritis/conditions/rheumatoid-arthritis/
Rheumatoid arthritis | Causes, symptoms, treatments
What is rheumatoid arthritis? Rheumatoid arthritis is a condition that can cause pain, swelling and stiffness in joints. It is what is known as an auto-immune condition. This means that the immune system, which is the body’s natural self-defence system, gets confused and starts to attack your body’s healthy tissues. In rheumatoid arthritis, the main way it does this is with inflammation in your joints. Rheumatoid arthritis affects around 400,000 adults aged 16 and over in the UK. It can affect anyone of any age. It can get worse quickly, so early diagnosis and intensive treatment are important. The sooner you start treatment, the more effective it’s likely to be. To understand how rheumatoid arthritis develops, it helps to understand how a normal joint works. How does a normal joint work? A joint is where two bones meet. Most of our joints are designed to allow the bones to move in certain directions and within certain limits. For example, the knee is the largest joint in the body and one of the most complicated. It must be strong enough to take our weight and must lock into position, so we can stand upright. It also has to act as a hinge, so we can walk, and needs to twist and turn when we run or play sports. The end of each bone is covered with cartilage that has a very smooth, slippery surface. The cartilage allows the ends of the bones to move against each other, almost without rubbing. The joint is held in place by the synovium, which contains thick fluid to protect the bones and joint. The synovium has a tough outer layer that holds the joint in place and stops the bones moving too far. Strong cords called tendons anchor the muscles to the bones. What happens in a joint affected by rheumatoid arthritis?
If you have rheumatoid arthritis, your immune system can cause inflammation inside a joint or a number of joints. Inflammation is normally an important part of how your immune system works. It allows the body to send extra fluid and blood to a part of the body under attack from an infection. For example, if you have a cut that gets infected, the skin around it can become swollen and a different colour. However, in the case of rheumatoid arthritis, this inflammation in the joint is unnecessary and causes problems. When the inflammation goes down, the capsule around the synovium remains stretched and can’t hold the joint in its proper position. This can cause the joint to become unstable and move into unusual positions. Symptoms The main symptoms are pain, swelling and stiffness in the joints, especially first thing in the morning or after sitting still for a long time. Other symptoms can include:
tiredness and lack of energy – this can be known as fatigue
a poor appetite (not feeling hungry)
weight loss
a high temperature, or a fever
sweating
dry eyes – as a result of inflammation
chest pain – as a result of inflammation.
Rheumatoid arthritis can affect any joint in the body, although it is often felt in the small joints in the hands and feet first. Both sides of the body are usually affected at the same time, in the same way, but this doesn’t always happen. A few people develop fleshy lumps called rheumatoid nodules, which form under the skin around affected joints. They can sometimes be painful, but usually are not. Causes The following can play a part in why someone has rheumatoid arthritis: Age Rheumatoid arthritis affects adults of any age, although most people are diagnosed between the ages of 40 and 60. Around three-quarters of people with rheumatoid arthritis are of working age when they are first diagnosed. Sex Rheumatoid arthritis is two to three times more common among women than men.
Genetics Rheumatoid arthritis develops because of a combination of genetic and environmental factors, such as smoking and diet. It is unclear what the genetic link is, but it is thought that having a relative with the condition increases your chance of developing the condition. Weight If you are overweight, you have a significantly greater chance of developing rheumatoid arthritis than if you are a healthy weight. The body mass index (BMI) is a measure that calculates if your weight is healthy, using your height and weight. How will rheumatoid arthritis affect me? Because rheumatoid arthritis can affect different people in different ways, we can’t predict how the condition might develop for you. If you smoke, it’s a very good idea to quit after a diagnosis of rheumatoid arthritis. This is because: rheumatoid arthritis may be worse in smokers than non-smokers smoking can weaken how well your medication works. Physical activity is also important, as it can improve your symptoms and benefit your overall health. Blood tests and x-rays will help your doctor assess how fast your arthritis is developing and what the outlook for the future may be. This will also help your doctor to decide which form of treatment to recommend. The outlook for people with rheumatoid arthritis is improving all the time, as new and more effective treatments become available. It is possible to lead a full and active life with the condition, but it is important to take your medication as prescribed and make necessary lifestyle changes. Diagnosis A diagnosis of rheumatoid arthritis is based on your symptoms, a physical examination and the results of x-rays, scans and blood tests. It can be difficult to diagnose because there isn't a test that can prove you definitely have it. There are also quite a few conditions that have the same symptoms. Your doctor will ask about your symptoms and do a physical examination. They will look for swollen joints and check how well your joints move. 
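The body mass index mentioned under "Weight" above is calculated from height and weight. As a minimal sketch (the formula is the standard one, weight in kilograms divided by height in metres squared; the category cut-offs shown are the commonly used NHS/WHO thresholds, which this article does not itself list):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    """Classify a BMI value using the commonly used cut-offs
    (assumed here; not quoted in the article itself)."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "healthy weight"
    if value < 30:
        return "overweight"
    return "obese"

reading = bmi(85, 1.75)
print(round(reading, 1))      # 27.8
print(bmi_category(reading))  # overweight
```

A BMI in the overweight or obese ranges is the kind of reading the article links to a greater chance of developing rheumatoid arthritis.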
Rheumatoid arthritis can affect different parts of your body at once, so it's important to tell your doctor about all the symptoms you've had, even if they don't seem to be related. If they think you have rheumatoid arthritis, they will refer you to a rheumatologist and may arrange blood tests to help confirm a diagnosis. Blood tests There's no single blood test that can confirm you have rheumatoid arthritis. However, there are a few tests that can show possible signs of the condition. Some of the main tests are outlined below. Erythrocyte sedimentation rate (ESR) A sample of your red blood cells is put into a test tube of liquid. The cells are timed to see how long they take to get to the bottom of the tube. If the cells sink faster than usual, you may have levels of inflammation that are higher than normal. Rheumatoid arthritis is just one possible cause. C-reactive protein (CRP) This test can show if there is inflammation in your body. It does this by checking how much CRP there is in your blood. If there is more CRP than usual, you may have inflammation in your body. Full blood count A full blood count measures the number of red blood cells you have. These carry iron around your body, and a low number of red blood cells means you have a low iron content. This may mean you have anaemia (an-ee-me-er), which is common in people with RA, although having anaemia doesn't prove you have RA. Rheumatoid factor and anti-CCP antibodies About half of all people with rheumatoid arthritis have rheumatoid factor in their blood when the condition starts. However, around 1 in every 20 people without rheumatoid arthritis also test positive for rheumatoid factor. There is another antibody test called anti-CCP that you can take. People who test positive for anti-CCP are very likely to get rheumatoid arthritis. However, not everyone who has the condition has this antibody. Scans Scans may be used to check for joint inflammation and damage.
These can be used to diagnose rheumatoid arthritis and to check how the condition is developing. These may include:
x-rays – these will show any changes in your joints
ultrasound scans – a picture of your joints is created using high-frequency sound waves
magnetic resonance imaging (MRI) scans – pictures of your joints are produced using strong magnetic fields and radio waves.
Many people with rheumatoid arthritis need to take more than one drug. This is because different drugs work in different ways. Your drug treatments may be changed from time to time. This can depend on how bad your symptoms are, or because something relating to your condition has changed. Drugs may be available under several different names. Each drug has an approved name – sometimes called a generic name. Manufacturers often give their own brand or trade name to the drug as well. For example, Nurofen is a brand name for ibuprofen. The approved name should always be on the pharmacist’s label, even if a brand name appears on the packaging. Check with your doctor, rheumatology nurse specialist or pharmacist if you’re not sure about anything. Painkillers Painkillers can help to relieve the pain caused by rheumatoid arthritis, but should not be the only treatment used. There are many types and strengths of painkillers available – some can be bought over the counter from a pharmacy, while some are only available on prescription. For guidance, ask a healthcare professional in charge of your care. Disease-modifying anti-rheumatic drugs (DMARDs) There are three types of DMARD:
conventional synthetic DMARDs (sometimes called csDMARDs)
biological therapies (sometimes called bDMARDs)
targeted synthetic DMARDs (sometimes called tsDMARDs).
You will need to have regular blood tests if you take DMARDs, as they can affect your liver. It may be a while before you notice your DMARD working – possibly a few months. It is important to keep taking your medication during this time.
The table below shows the DMARDs available for the treatment of rheumatoid arthritis. Managing symptoms Managing a flare-up When your symptoms get worse, this is known as a flare-up. These can happen at any time, but can happen after you have been stressed or had an infection. Over time, you may get better at noticing the early signs of a flare-up. If you’re having regular flare-ups, you should mention this to your doctor. It may be that you need to review your treatment. Here are a few things you can do to help yourself during a flare-up:
Keep taking your medication at the doses you’ve been prescribed.
Do gentle exercises.
Put heated items on the joint – these can include a hot water bottle or electric heat pad. See below for more information.
Put cold items on the joint – these can include a bowl of cold water with ice cubes, a pack of frozen peas wrapped in a towel, or a damp towel that has been kept in the fridge. See below for more information.
Let people around you know, so they can help and support you.
Tips for using heated items Heated items that could help your joint pain include a hot water bottle or electric heat pad. Wrap these in a towel, then place on a painful joint. You could also try having a hot or warm shower or bath. Other heated items that people have found useful are a wheat bag, heat pads, deep heat cream, or a heat lamp. Make sure these items are warm but not hot, as you could risk burning or scalding yourself. Gentle heat will be enough. A towel should be placed between the heated item and the skin for protection. Check your skin regularly, to make sure it is not burning. Tips for using ice packs Some people find that using an ice pack can help their joint pain. You can buy one from a pharmacy, or you can make one at home, by wrapping ice cubes in a plastic bag or wet tea towel. Here’s how to apply the ice to your skin: Rub a small amount of oil over where you’d like the ice pack to go. Any type of oil can be used.
If your skin is broken – for example, if you have a cut – don’t use the oil and cover the area with a plastic bag. This will stop the cut getting wet. Put a cold, wet flannel over the oil. Put the ice pack over the flannel and hold it there. After five minutes, check the colour of your skin. Remove the ice pack if your skin has turned bright pink or red. If it hasn’t, leave it on for another 5 to 10 minutes. You can leave the ice pack on for 20-30 minutes. Don’t leave it on for any longer, as you could damage your skin if it is left on for too long. Physical activity You may find it difficult to be physically active in the first place, especially if you are having a flare-up. However, if you find the right activities, help and support, you can be active in a way that suits you. Not keeping active can lead to stiff joints and weak muscles. It could also cause you to gain weight. If you are new to exercise, or haven’t exercised in some time, you may feel a bit sore the first few times you try a new activity. As you get used to it, this will get better. However, if a type of exercise always causes a flare-up, it's probably best to find another one. High-impact exercises such as step exercises, or contact sports, such as rugby and football, are more likely to cause problems. Swimming, walking, gentle cycling and aqua aerobics generally put less strain on your joints. Yoga and tai chi are generally thought to be suitable for those with rheumatoid arthritis. However, there are many different styles, so it is best to check the style is suitable for your condition before you sign up to a class. You should also break up long periods of sitting with light activity, to avoid being sedentary for extended periods. Physiotherapy A physiotherapist can suggest suitable exercises for you and support you in keeping active. 
People with rheumatoid arthritis should have access to specialist physiotherapy to help manage their condition and improve their fitness, flexibility and strength. You should also have follow-up reviews. Hydrotherapy You may also find that hydrotherapy helps to ease your symptoms. This involves doing special exercises in a warm-water pool, under the supervision of a trained physiotherapist. Hydrotherapy can also be called ‘aquatic therapy’ or ‘aquatic physiotherapy’. Any member of your healthcare team should be able to refer you to an NHS physiotherapist if they think you might benefit from hydrotherapy. In some parts of the UK, you can also refer yourself to a physiotherapist, who will assess whether hydrotherapy would be suitable for you. Check with your GP or call your local rheumatology department to find out if an NHS physiotherapist in your area will accept self-referrals. You can also choose to use private healthcare, but it’s important to be aware that in rare instances, private hydrotherapy may be unregulated, and so the quality of the changing areas, the water or general environment can vary. It’s also recommended that you see someone who’s a member of the Chartered Society of Physiotherapy (CSP) and who’s accredited by the Aquatic Therapy Association of Chartered Physiotherapists (ATACP). Hydrotherapy can help to improve the pain in your joints, and you may also find it relaxing. Ask your doctor or physiotherapist if they think hydrotherapy would be suitable for you. Foot problems Foot problems for those with rheumatoid arthritis include:
pain
soreness
warmth and swelling that lasts at least a few days
the foot changing shape
difficulty walking
your shoes rubbing
corns or calluses, and nail problems
infections such as athlete’s foot, verruca or bacterial infections.
If these problems are left untreated, they can lead to the infections spreading and, eventually, to ulcers forming. It is therefore important to see a podiatrist, who specialises in general foot care.
They can give advice on footwear, information on how to treat foot problems yourself, and can provide special insoles. They can also monitor your foot and general health, and will refer you to a consultant if they find any issues. There may be a podiatrist in the rheumatology department where you receive your care, or you may get a referral to an NHS podiatrist. GPs can also refer you to community-based services. Complementary treatments Complementary treatments can be useful when used alongside prescribed medicines for the treatment of rheumatoid arthritis. However, they should not replace your prescribed medicines and you should talk to your rheumatology team before starting a complementary treatment. Generally, complementary treatments aren't considered to be evidence-based and are therefore not usually available on the NHS. Living with rheumatoid arthritis Occupational therapy Occupational therapists can help you keep doing the activities you need or want to do – at home or at work. They will work with you to find different ways of doing things. The benefits of seeing an occupational therapist include:
improved confidence
being able to do more things, at home or at work
being able to live independently at home
allowing you to return to or stay in work.
Ask your GP about occupational therapists that are local to you. If you regularly see a social worker, nurse or other health care professional, they can help you contact an occupational therapist through health or social services. Be prepared to describe any difficulties you have and how they are affecting your life, or the lives of those who care for you. You may want to know how long it will be until you get an appointment, so remember to ask if there is a waiting list. You can also see an occupational therapist privately. You will be able to get an appointment quicker, but it will cost you money.
Further support If you are living with rheumatoid arthritis, you may also be living with one or more other conditions. This is not unusual – 54% of those aged over 65 in England are living with two or more long-term conditions. Depression is the most common condition among people with rheumatoid arthritis, affecting one in six people. If you are feeling low, talk to your GP, who can signpost you to the appropriate services. You can also call the arthritis helpline for free on 0800 5200 520, where our trained advisors can give you help and support. We’re open from 9am to 8pm, Monday to Friday, except for bank holidays. If you're over the age of 55, The Silver Line is there 24 hours a day, 365 days a year to provide information, support and friendship. Surgery Surgery is sometimes needed for those with rheumatoid arthritis. This can be to reduce pain, correct joint shape or restore your ability to use your joint. The types of surgery people with rheumatoid arthritis undergo are: Foot surgery Examples of this type of surgery include:
removal of inflamed tissues around the joints of the forefoot
removal of the small joints in the ball of the foot
straightening of toes
fixation of joints.
Finger, hand and wrist surgery Examples of this type of surgery include:
carpal tunnel release
removal of inflamed tissue in the finger joints
release of tendons in the fingers (this is used to treat unusual bending).
Arthroscopy Arthroscopy is used to remove inflamed joint tissue. During the operation, an arthroscope is inserted into the joint through a small cut in the skin, so the surgeon can see the affected joint. Damaged tissue is then removed. You usually don't have to stay overnight in hospital for this type of surgery, but the joint will need to be rested at home for several days. Joint replacement Some people with rheumatoid arthritis need surgery to replace part or all of a joint - this is known as a joint replacement, or arthroplasty.
Common joint replacements include the hip, knee and shoulder. Replacement of these joints is a major operation that involves several days in hospital, followed by rehabilitation, which can take months. The latest joints generally last for 10 to 20 years, and there is no guarantee that the new joint will be fully functional.

Supplements

There is little evidence that taking supplements will improve rheumatoid arthritis, or its symptoms. However, some people think certain supplements work for them. What is important is that you are not wasting your money on expensive supplements that won’t do anything for your condition. Some supplements may be prescribed by your specialist team or GP. For example, folic acid may be prescribed if you are taking methotrexate, and calcium and vitamin D may be prescribed if you are taking steroids. A healthy, balanced diet should contain all the vitamins and minerals you need. However, it’s recommended that people should consider taking a daily supplement containing 10 micrograms of vitamin D in autumn and winter, as it is difficult to get the amount needed through sunlight at this time of year. It’s also recommended that people whose skin has little or no exposure to the sun should take a vitamin D supplement throughout the year. This could include people in care homes and people who cover their skin when outside. Ethnic minority groups with dark skin – from African, Afro-Caribbean and South Asian backgrounds – should also consider taking a supplement throughout the year, as they may not get enough vitamin D from sunlight in the summer.

Sex and relationships

Most couples – whether they have arthritis or not – go through phases when their sex life is less exciting or satisfying than it was. There may be physical reasons for this, but emotional factors and stress often play a part.
Arthritis can present a number of challenges in a relationship, including the following:
Pain and fatigue may reduce your enjoyment of sex, and of other activities and interests that you share with your partner.
Arthritis may mean that you can’t always manage the household jobs you usually do, or you may need help with them.
If your arthritis affects your work, it may lead to financial worries.
Having arthritis may affect your mood and self-esteem.
Your partner will be concerned about how the condition is affecting you.

Aids and minor adaptations you receive from your local council should not be means-tested, meaning that no matter how much money you have, the local authority has to provide you with them. If you live in Wales, Scotland or Northern Ireland, contact your GP or local council for information about access to these items.

If you identify as gay, lesbian, bisexual or transgender, Switchboard is available from 10am–11pm, 365 days a year, to listen to any problems you're having.

Sleep

Getting a good night’s sleep can be tough, especially when you are living with the aches, pains and inflammation of rheumatoid arthritis. For more information on how to get a good night’s sleep, see our Sleep and Arthritis booklet, or visit The Sleep Council website.

Research and new developments

Here, we round up some of the latest developments in rheumatoid arthritis research. Our previous research has:
led to the development of a new type of drug. These drugs are called ‘biological therapies’ and have transformed the lives of people with rheumatoid arthritis over the past 20 years.
highlighted the importance of starting early, intensive treatment for inflammatory arthritis within 12 weeks of symptoms starting. It has also led to the introduction of a best practice tariff for those with rheumatoid arthritis, which means people are being diagnosed quicker.

We're currently funding research projects to find out what causes rheumatoid arthritis, and to develop new and improved treatments.
For example:
our centre for genetics and genomics is trying to understand how genetic factors determine whether certain people are at risk of developing inflammatory arthritis, and what happens when they do
our rheumatoid arthritis pathogenesis centre of excellence is looking at why rheumatoid arthritis starts, why it attacks the joints, and why the inflammation carries on rather than switching off
we are also investigating how the organisms that live on our skin and in our gut differ in those with rheumatoid arthritis and how this affects a person’s response to treatment.

Keri's story

I was in my third year of university, studying to be a primary school teacher. Suddenly, one morning, my thumbs became very painful. Then my elbows became stiff and sore, and I couldn’t straighten my arms. At first I only had symptoms in the morning, but eventually I had them all the time. Quite a few of my joints were stiff and painful, which meant I couldn’t get around very well. I was also tired a lot. When this happened, my GP referred me to a rheumatologist. I graduated from my teacher training course two years later than planned, but have not been able to work as a teacher yet, due to my arthritis. However, I have used my teaching skills to volunteer for Versus Arthritis, leading self-management courses in Northern Ireland, which I find extremely enjoyable and rewarding. I am also the Chairperson of my local Versus Arthritis support group. Baking is one of my hobbies, although using certain kitchen equipment can be difficult. Being social is important to me too and I enjoy going to cafés to catch up with my friends. When I’m in pain, I can distract myself by reading or listening to music. Exercise is important to me too, as I find that doing some gentle exercises makes my joints less painful. There are a few chair-based exercises I do regularly and I also enjoy going for short walks.
Swimming is great too and I find that doing exercises in the heated water of the hydrotherapy pool makes me feel less stiff and sore. Medication-wise, I’m currently using a biological injection called Enbrel. I’ve been using it for five years and inject myself once a week. It’s really helped to control my condition and my flare-ups happen less often. At the moment, I’m doing ok. There are good days and bad days. I still experience pain every day, but am doing much better than when I was first diagnosed. I have fewer flare ups, which shows that the medication I’m using is really helping me. My advice to anyone who has recently been diagnosed with rheumatoid arthritis would be to join a support group. Talking to another person who has the same condition as you and knows what you’re going through is really useful and reassuring. It’s helped me a lot in my journey. I’d also say that getting a good night’s sleep is important, as it can help your body recover from the effects of your arthritis. It’s also important for me to learn more about my condition, as it helps me to understand what my body is going through. I really do believe that knowledge is power!
If they think you have rheumatoid arthritis, they will refer you to a rheumatologist and may arrange blood tests to help confirm a diagnosis.

Blood tests

There's no single blood test that can confirm you have rheumatoid arthritis. However, there are a few tests that can show possible signs of the condition. Some of the main tests are outlined below.

Erythrocyte sedimentation rate (ESR)
A sample of your red blood cells is put into a test tube of liquid. The cells are timed to see how long they take to get to the bottom of the tube. If the cells sink faster than usual, you may have levels of inflammation that are higher than normal. Rheumatoid arthritis is just one possible cause.

C-reactive protein (CRP)
This test can show if there is inflammation in your body. It does this by checking how much CRP there is in your blood. If there is more CRP than usual, you may have inflammation in your body.

Full blood count
A full blood count measures the number of red blood cells you have. These carry iron around your body, and a low number of red blood cells means you have a low iron content. This may mean you have anaemia (an-ee-me-er), which is common in people with RA, although having anaemia doesn't prove you have RA.

Rheumatoid factor and anti-CCP antibodies
About half of all people with rheumatoid arthritis have rheumatoid factor in their blood when the condition starts. However, around 1 in every 20 people without rheumatoid arthritis also test positive for rheumatoid factor. There is another antibody test called anti-CCP that you can take. People who test positive for anti-CCP are very likely to get rheumatoid arthritis. However, not everyone who has the condition has this antibody.

Scans
Scans may be used to check for joint inflammation and damage.
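The rheumatoid factor figures above ("about half" of people with RA test positive, while "1 in every 20" people without RA also do) explain why a positive result on its own proves little. A minimal back-of-the-envelope sketch: the sensitivity and false-positive rate come from the text, but the population size of 10,000 and the roughly 1% RA prevalence are assumed round figures for illustration only.

```python
def rf_positive_breakdown(population, prevalence, sensitivity, false_positive_rate):
    """Split the rheumatoid-factor-positive people into true and false positives."""
    with_ra = population * prevalence
    without_ra = population - with_ra
    true_pos = with_ra * sensitivity          # RA patients who test positive
    false_pos = without_ra * false_positive_rate  # healthy people who test positive
    return true_pos, false_pos

# ~50% sensitivity and ~1-in-20 false positives, per the text; prevalence assumed.
tp, fp = rf_positive_breakdown(10_000, 0.01, 0.50, 0.05)
print(f"True positives: {tp:.0f}, false positives: {fp:.0f}")
# Roughly 50 true positives against roughly 495 false positives: in unselected
# testing, most positive rheumatoid factor results are not rheumatoid arthritis.
```

This is why the text stresses that a positive rheumatoid factor does not confirm the diagnosis: at low prevalence, false positives heavily outnumber true ones.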
https://naomedical.com/info/can-rheumatoid-arthritis-be-diagnosed-with-a-blood-test.html
Can Rheumatoid Arthritis Be Diagnosed With A Blood Test - Nao ...
Can Rheumatoid Arthritis Be Diagnosed With A Blood Test

Rheumatoid arthritis (RA) is a chronic autoimmune disease that primarily affects the joints. It is characterized by inflammation, pain, and stiffness, and can lead to joint deformity and disability if left untreated. Early diagnosis and treatment are crucial for managing the symptoms and preventing long-term complications.

The Role of Blood Tests in Diagnosing Rheumatoid Arthritis

Blood tests play a significant role in the diagnosis of rheumatoid arthritis. While there is no single test that can definitively diagnose RA, certain blood markers can indicate the presence of the disease and help healthcare professionals make an accurate diagnosis.

Rheumatoid Factor (RF)
One of the blood tests commonly used to diagnose RA is the rheumatoid factor (RF) test. RF is an antibody that is present in the blood of many people with RA. However, it is important to note that not all individuals with RA have a positive RF test, and some individuals without RA may have a positive RF test. Therefore, RF alone is not sufficient for a definitive diagnosis of RA.

Anti-Cyclic Citrullinated Peptide (anti-CCP) Antibodies
Another blood test that is often used in the diagnosis of RA is the anti-cyclic citrullinated peptide (anti-CCP) antibody test. Anti-CCP antibodies are specific to RA and are rarely found in individuals without the disease. A positive anti-CCP test, along with other clinical findings, can support a diagnosis of RA.

Erythrocyte Sedimentation Rate (ESR) and C-Reactive Protein (CRP)
Erythrocyte sedimentation rate (ESR) and C-reactive protein (CRP) are markers of inflammation that can be elevated in individuals with RA. While these tests are not specific to RA and can be elevated in other conditions as well, they can provide additional evidence of inflammation in the body.

Importance of Early Detection

Early detection of rheumatoid arthritis is crucial for initiating timely treatment and preventing joint damage.
Studies have shown that early intervention with disease-modifying antirheumatic drugs (DMARDs) can significantly improve outcomes and slow down the progression of the disease. Therefore, if you are experiencing joint pain, stiffness, and swelling, it is important to consult a healthcare professional for an accurate diagnosis and appropriate treatment.

How Medical Health Authority Can Help

At Medical Health Authority, we understand the challenges of living with rheumatoid arthritis and the importance of early detection. Our comprehensive healthcare solutions are designed to provide superior quality multispeciality services to meet all of our patients' needs. Our team of experienced healthcare professionals can guide you through the diagnostic process, including blood tests, and develop a personalized treatment plan tailored to your specific condition. Schedule a consultation with Medical Health Authority today to discuss your symptoms and explore diagnostic options. Early detection and timely treatment can make a significant difference in managing rheumatoid arthritis and improving your quality of life.

Frequently Asked Questions

1. Can rheumatoid arthritis be diagnosed with a blood test alone?
No, rheumatoid arthritis cannot be diagnosed with a blood test alone. While certain blood markers can indicate the presence of the disease, a comprehensive evaluation that includes clinical examination, medical history, and imaging tests is necessary for an accurate diagnosis.

2. What other tests may be done to diagnose rheumatoid arthritis?
In addition to blood tests, imaging tests such as X-rays, ultrasounds, and magnetic resonance imaging (MRI) may be done to assess joint damage and inflammation. These tests can provide valuable information to support the diagnosis of rheumatoid arthritis.

3. Can rheumatoid arthritis be cured?
Currently, there is no cure for rheumatoid arthritis.
However, with early diagnosis and appropriate treatment, the symptoms can be managed effectively, and joint damage can be minimized.

4. What are the treatment options for rheumatoid arthritis?
The treatment of rheumatoid arthritis typically involves a combination of medication, physical therapy, and lifestyle modifications. Disease-modifying antirheumatic drugs (DMARDs) are commonly prescribed to slow down the progression of the disease and reduce inflammation.

5. How can I manage the symptoms of rheumatoid arthritis?
In addition to medical treatment, there are several self-care strategies that can help manage the symptoms of rheumatoid arthritis. These include regular exercise, maintaining a healthy weight, applying heat or cold to affected joints, and using assistive devices to reduce joint stress.

6. Can rheumatoid arthritis affect other parts of the body?
Yes, rheumatoid arthritis can affect other parts of the body besides the joints. It can cause inflammation in the eyes, lungs, heart, and blood vessels, leading to complications in these organs.

7. Is rheumatoid arthritis hereditary?
While there is a genetic component to rheumatoid arthritis, having a family history of the disease does not guarantee that an individual will develop it. Other factors, such as environmental triggers, also play a role in the development of rheumatoid arthritis.

8. Can rheumatoid arthritis lead to disability?
If left untreated or poorly managed, rheumatoid arthritis can lead to joint deformity and disability. However, with early diagnosis, appropriate treatment, and lifestyle modifications, the risk of disability can be significantly reduced.

Disclaimer: The content in this article is provided for general informational purposes only. It may not be accurate, complete, or up-to-date and should not be relied upon as medical, legal, financial, or other professional advice. Any actions or decisions taken based on this information are the sole responsibility of the user.
https://www.ccjm.org/content/86/3/198
Laboratory tests in rheumatology: A rational approach | Cleveland ...
ABSTRACT

Laboratory tests are useful in diagnosing rheumatic diseases, but clinicians should be aware of the limitations of these tests. This article uses case vignettes to provide practical and evidence-based guidance on requesting and interpreting selected tests, including rheumatoid factor, anticitrullinated peptide antibody, antinuclear antibody, antiphospholipid antibodies, antineutrophil cytoplasmic antibody, and human leukocyte antigen-B27.

KEY POINTS

If a test was requested without a clear indication and the result is positive, it is important to bear in mind the potential pitfalls associated with that test; immunologic tests have limited specificity.

A positive rheumatoid factor or anticitrullinated peptide antibody test can help diagnose rheumatoid arthritis in a patient with early polyarthritis.

A positive HLA-B27 test can help diagnose ankylosing spondylitis in patients with inflammatory back pain and normal imaging.

A positive antineutrophil cytoplasmic antibody (ANCA) test can help diagnose ANCA-associated vasculitis in a patient with glomerulonephritis.

A negative antinuclear antibody test reduces the likelihood of lupus in a patient with joint pain.

Laboratory tests are often ordered inappropriately for patients in whom a rheumatologic illness is suspected; this occurs in both primary and secondary care.1 Some tests are available both singly and as part of a battery of tests screening healthy people without symptoms. The problem: negative test results are by no means always reassuring, and false-positive results raise the risks of unnecessary anxiety for patients and clinicians, needless referrals, and potential morbidity due to further unnecessary testing and exposure to wrong treatments.2 Clinicians should be aware of the pitfalls of these tests in order to choose them wisely and interpret the results correctly. This article provides practical guidance on requesting and interpreting some common tests in rheumatology, with the aid of case vignettes.
RHEUMATOID FACTOR AND ANTICITRULLINATED PEPTIDE ANTIBODY

A 41-year-old woman, previously in good health, presents to her primary care practitioner with a 6-week history of pain and swelling in her hands and early morning stiffness lasting about 2 hours. She denies having any extraarticular symptoms. Physical examination reveals synovitis across her right metacarpophalangeal joints, proximal interphalangeal joint of the left middle finger, and left wrist. The primary care physician is concerned that her symptoms might be due to rheumatoid arthritis. Would testing for rheumatoid factor and anticitrullinated peptide antibody be useful in this patient?

Rheumatoid factor is an antibody (immunoglobulin M, IgG, or IgA) targeted against the Fc fragment of IgG.3 It was so named because it was originally detected in patients with rheumatoid arthritis, but it is neither sensitive nor specific for this condition. A meta-analysis of more than 5,000 patients with rheumatoid arthritis reported that rheumatoid factor testing had a sensitivity of 69% and specificity of 85%.4 Numerous other conditions can be associated with a positive test for rheumatoid factor (Table 1). Hence, a diagnosis of rheumatoid arthritis cannot be confirmed with a positive result alone, nor can it be excluded with a negative result. Anticitrullinated peptide antibody, on the other hand, is much more specific for rheumatoid arthritis (95%), as it is seldom seen in other conditions, but its sensitivity is similar to that of rheumatoid factor (68%).4–6 A positive result would thus lend strength to the diagnosis of rheumatoid arthritis, but a negative result would not exclude it.

Approach to early arthritis

When faced with a patient with early arthritis, some key questions to ask include7,8:

Is this an inflammatory or a mechanical problem?
Inflammatory arthritis is suggested by joint swelling that is not due to trauma or bony hypertrophy, early morning stiffness lasting longer than 30 minutes, and elevated inflammatory markers (erythrocyte sedimentation rate or C-reactive protein). Involvement of the small joints of the hands and feet may be suggested by pain on compression of the metacarpophalangeal and metatarsophalangeal joints, respectively.

Is there a definite identifiable underlying cause for the inflammatory arthritis? The pattern of development of joint symptoms or the presence of extraarticular symptoms may suggest an underlying problem such as gout, psoriatic arthritis, systemic lupus erythematosus, or sarcoidosis.

If the arthritis is undifferentiated (ie, there is no definite identifiable cause), is it likely to remit or persist? This is perhaps the most important question to ask in order to prognosticate. Patients with risk factors for persistent disease, ie, for development of rheumatoid arthritis, should be referred to a rheumatologist early for timely institution of disease-modifying antirheumatic drug therapy.9 Multiple studies have shown that patients in whom this therapy is started early have much better clinical, functional, and radiologic outcomes than those in whom it is delayed.10–12 The revised American College of Rheumatology and European League Against Rheumatism criteria13 include the following factors as predictors of persistence: the number of involved joints (with greater weight given to involvement of small joints), positive serology (rheumatoid factor and anticitrullinated peptide antibody), elevated acute-phase reactants, and symptom duration of 6 weeks or longer.

If both rheumatoid factor and anticitrullinated peptide antibody are positive in a patient with early undifferentiated arthritis, the risk of progression to rheumatoid arthritis is almost 100%, thus underscoring the importance of testing for these antibodies.5,6 Referral to a rheumatologist should, however, not be delayed in patients with negative test results (more than one-third of patients with rheumatoid arthritis may be negative for both), and should be
considered in those with inflammatory joint symptoms persisting longer than 6 weeks, especially with involvement of the small joints (sparing the distal interphalangeals) and elevated acute-phase response. Rheumatoid factor in healthy people without symptoms In some countries, testing for rheumatoid factor is offered as part of a battery of screening tests in healthy people who have no symptoms, a practice that should be strongly discouraged. Multiple studies, both prospective and retrospective, have demonstrated that both rheumatoid factor and anticitrullinated peptide antibody may be present several years before the clinical diagnosis of rheumatoid arthritis.6,14–16 But the risk of developing rheumatoid arthritis for asymptomatic individuals who are rheumatoid factor-positive depends on the rheumatoid factor titer, positive family history of rheumatoid arthritis in first-degree relatives, and copresence of anticitrullinated peptide antibody. The absolute risk, nevertheless, is still very small. In some, there might be an alternative explanation such as undiagnosed Sjögren syndrome or hepatitis C. In any event, no strategy is currently available that is proven to prevent the development of rheumatoid arthritis, and there is no role for disease-modifying therapy during the preclinical phase.16 Although her rheumatoid factor and anticitrullinated peptide antibody tests are negative, she is referred to a rheumatologist be cause she has predictors of persistent disease, ie, symptom duration of 6 weeks, involvement of the small joints of the hands, and elevated erythrocyte sedimentation rate and C-reactive protein. The rheumatologist checks her parvovirus serology, which is negative. The patient is given parenteral depot corticosteroid therapy, to which she responds briefly. Because her symptoms persist and continue to worsen, methotrexate treatment is started after an additional 6 weeks. 
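The sensitivity and specificity figures quoted above for rheumatoid factor (69%/85%) and anticitrullinated peptide antibody (68%/95%) can be turned into predictive values with Bayes' rule, which makes the article's point concrete: the two tests have similar sensitivities, but the more specific test gives a positive result far more diagnostic weight. A sketch, assuming an illustrative pretest probability of 30% (this figure is an assumed example value, not from the article):

```python
def predictive_values(sensitivity, specificity, pretest):
    """Positive and negative predictive values via Bayes' rule."""
    ppv = (sensitivity * pretest) / (
        sensitivity * pretest + (1 - specificity) * (1 - pretest))
    npv = (specificity * (1 - pretest)) / (
        (1 - sensitivity) * pretest + specificity * (1 - pretest))
    return ppv, npv

# Sensitivity/specificity from the meta-analysis cited in the text;
# the 0.30 pretest probability is an assumed example.
rf = predictive_values(0.69, 0.85, 0.30)    # rheumatoid factor
ccp = predictive_values(0.68, 0.95, 0.30)   # anticitrullinated peptide antibody
print(f"RF:       PPV {rf[0]:.0%}, NPV {rf[1]:.0%}")
print(f"Anti-CCP: PPV {ccp[0]:.0%}, NPV {ccp[1]:.0%}")
```

At this pretest probability a positive anti-CCP result is considerably more convincing than a positive rheumatoid factor, while a negative result from either test still leaves a meaningful residual probability of disease — matching the article's caution that negative results do not exclude rheumatoid arthritis.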
ANTINUCLEAR ANTIBODY

A 37-year-old woman presents to her primary care physician with the complaint of tiredness. She has a family history of systemic lupus erythematosus in her sister and maternal aunt. She is understandably worried about lupus because of the family history and is asking to be tested for it. Would testing for antinuclear antibody be reasonable?

Antinuclear antibody is not a single antibody but rather a family of autoantibodies that are directed against nuclear constituents such as single- or double-stranded deoxyribonucleic acid (dsDNA), histones, centromeres, proteins complexed with ribonucleic acid (RNA), and enzymes such as topoisomerase.17,18 Protein antigens complexed with RNA and some enzymes in the nucleus are also known as extractable nuclear antigens (ENAs). They include Ro, La, Sm, Jo-1, RNP, and ScL-70 and are named after the patient in whom they were first discovered (Robert, Lavine, Smith, and John), the antigen that is targeted (ribonucleoprotein or RNP), and the disease with which they are associated (anti-ScL-70 or antitopoisomerase in diffuse cutaneous scleroderma).

Antinuclear antibody testing is commonly requested to exclude connective tissue diseases such as lupus, but the clinician needs to be aware of the following points:

Antinuclear antibody may be encountered in conditions other than lupus
These include:
autoimmune thyroid disease
infection with organisms that share the epitope with self-antigens (molecular mimicry)
cancers
drugs such as hydralazine, procainamide, and minocycline.

Antinuclear antibody might also be produced by the healthy immune system from time to time to clear the nuclear debris that is extruded from aging cells. A study in healthy individuals20 reported a prevalence of positive antinuclear antibody of 32% at a titer of 1/40, 15% at a titer of 1/80, 7% at a titer of 1/160, and 3% at a titer of 1/320.
Importantly, a positive result was more common among family members of patients with autoimmune connective tissue diseases.21 Hence, a positive antinuclear antibody result does not always mean lupus.

Antinuclear antibody testing is highly sensitive for lupus

With current laboratory methods, antinuclear antibody testing has a sensitivity close to 100%. Hence, a negative result virtually rules out lupus. Two methods are commonly used to test for antinuclear antibody: indirect immunofluorescence and enzyme-linked immunosorbent assay (ELISA).22 While human epithelial (Hep2) cells are used as the source of antigen in immunofluorescence, purified nuclear antigens coated on multiple-well plates are used in ELISA. Although ELISA is simpler to perform, immunofluorescence has slightly better sensitivity (because the Hep2 cells express a wide range of antigens) and is still considered the gold standard. As expected, the higher sensitivity comes at the cost of reduced specificity (about 60%), so antinuclear antibody will also be detected in the other conditions listed above.23 To improve the specificity of antinuclear antibody testing, laboratories report titers (the highest dilution of the test serum that tested positive); a cutoff of greater than 1/80 is generally considered significant.

Do not order antinuclear antibody testing indiscriminately

If the antinuclear antibody test is requested indiscriminately, the positive predictive value for the diagnosis of lupus is only 11%.24 The test should be requested only when the pretest probability of lupus or other connective tissue disease is high. The positive predictive value is much higher in patients presenting with clinical or laboratory manifestations involving 2 or more organ systems (Table 2).18,25 Categorization of the specific antigen target improves disease specificity. The antinuclear antibody in patients with lupus may be targeted against single- or double-stranded DNA, histones, or 1 or more of the ENAs.
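The dependence of positive predictive value on pretest probability can be made concrete with Bayes' theorem. The short Python sketch below is illustrative only: the pretest probabilities are assumed figures, not from the article, while the sensitivity (~100%) and specificity (~60%) are the values quoted above.

```python
def ppv(sensitivity: float, specificity: float, pretest: float) -> float:
    """Positive predictive value via Bayes' theorem."""
    true_pos = sensitivity * pretest
    false_pos = (1 - specificity) * (1 - pretest)
    return true_pos / (true_pos + false_pos)

# Assumed pretest probabilities for illustration:
# ~2% if ANA is ordered indiscriminately, ~50% with multisystem involvement.
indiscriminate = ppv(sensitivity=1.00, specificity=0.60, pretest=0.02)
high_suspicion = ppv(sensitivity=1.00, specificity=0.60, pretest=0.50)

print(f"PPV, indiscriminate testing: {indiscriminate:.0%}")    # 5%
print(f"PPV, high pretest probability: {high_suspicion:.0%}")  # 71%
```

The same highly sensitive, modestly specific test is nearly uninformative when ordered indiscriminately but genuinely useful when the clinical suspicion is already high, which is the article's central point.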
Among these, the presence of anti-dsDNA or anti-Sm is highly specific for a diagnosis of lupus (close to 100%). Neither is sensitive for lupus, however, with anti-dsDNA present in only 60% of patients with lupus and anti-Sm in about 30%.17 Hence, patients with a positive antinuclear antibody and negative anti-dsDNA and anti-Sm may continue to pose a diagnostic challenge. Other examples of specific disease associations are listed in Table 3.

To sum up, the antinuclear antibody test should be requested only in patients with involvement of multiple organ systems. Although a negative result makes it extremely unlikely that the clinical presentation is due to lupus, a positive result is insufficient on its own to make a diagnosis of lupus. Diagnosing lupus is straightforward when patients present with a specific manifestation such as inflammatory arthritis, photosensitive skin rash, hemolytic anemia, thrombocytopenia, or nephritis, or with specific antibodies such as those against dsDNA or Sm. Patients who present with nonspecific symptoms such as arthralgia or tiredness with a positive antinuclear antibody and negative anti-dsDNA and anti-Sm may present difficulties even for the specialist.25–27

Back to our patient

Her primary care physician decides to check her complete blood cell count, erythrocyte sedimentation rate, and thyroid-stimulating hormone level. Although she is reassured that her tiredness is not due to lupus, she insists on getting an antinuclear antibody test. Her complete blood cell counts are normal. Her erythrocyte sedimentation rate is 6 mm/hour. However, her thyroid-stimulating hormone level is elevated, and subsequent testing shows low free thyroxine and positive thyroid peroxidase antibodies. The antinuclear antibody is positive at a titer of 1/80 and negative for anti-dsDNA and anti-ENA. We explain to her that the positive antinuclear antibody is most likely related to her autoimmune thyroid disease. She is referred to an endocrinologist.
ANTIPHOSPHOLIPID ANTIBODIES

A 24-year-old woman presents to the emergency department with acute unprovoked deep vein thrombosis in her right leg, confirmed by ultrasonography. She has no history of previous thrombosis, and the relevant family history is unremarkable. She has never been pregnant. Her platelet count is 84 × 10⁹/L (reference range 150–400), and her baseline activated partial thromboplastin time is prolonged at 62 seconds (reference range 23.0–32.4). The rest of her blood counts and her prothrombin time, liver enzyme levels, and serum creatinine level are normal. Should this patient be tested for antiphospholipid antibodies?

Antiphospholipid antibodies are important because of their association with thrombotic risk (both venous and arterial) and pregnancy morbidity. The name is a misnomer, as these antibodies are targeted against proteins that are bound to phospholipids, not only the phospholipids themselves. According to the modified Sapporo criteria for the classification of antiphospholipid syndrome,28 antiphospholipid antibodies should remain persistently positive on at least 2 separate occasions at least 12 weeks apart for the result to be considered significant, because some infections and drugs may be associated with the transient presence of antiphospholipid antibodies. Screening for antiphospholipid antibodies should include testing for IgM and IgG anticardiolipin antibodies, lupus anticoagulant, and IgM and IgG beta-2 glycoprotein I antibodies.29,30

Anticardiolipin antibodies

Anticardiolipin (aCL) antibodies may be targeted either against beta-2 glycoprotein I (beta-2GPI) that is bound to cardiolipin (a phospholipid) or against cardiolipin alone; the former is more specific. Antibodies directed against cardiolipin alone are usually transient and are associated with infections and drugs.
The result is considered significant only when anticardiolipin antibodies are present in a medium to high titer (> 40 IgG phospholipid units or IgM phospholipid units, or > 99th percentile).

Lupus anticoagulant

The antibody with “lupus anticoagulant activity” is targeted against prothrombin plus phospholipid or beta-2GPI plus phospholipid. The test for it is a functional assay involving 3 steps:
- Demonstrating the prolongation of a phospholipid-dependent coagulation assay such as the activated partial thromboplastin time (aPTT). (This may explain the prolongation of the aPTT in the patient described in the vignette.) Although the presence of lupus anticoagulant is associated with thrombosis, it is called an “anticoagulant” because of this in vitro prolongation of phospholipid-dependent coagulation assays.
- Mixing study. The phospholipid-dependent coagulation assay could be prolonged because of either a deficiency of a coagulation factor or the presence of antiphospholipid antibodies. These can be differentiated by mixing the patient’s plasma with normal plasma (which has all the clotting factors) in a 1:1 ratio. If the coagulation assay remains prolonged after the addition of normal plasma, clotting factor deficiency can be excluded.
- Addition of a phospholipid. If the prolongation of the coagulation assay is due to the presence of an antiphospholipid antibody, addition of extra phospholipid will correct it.

Beta-2 glycoprotein I antibody (anti-beta-2GPI)

The beta-2GPI antibody that is not bound to cardiolipin can be detected by separately testing for anti-beta-2GPI (the anticardiolipin test detects only the beta-2GPI that is bound to cardiolipin). The result is considered significant if anti-beta-2GPI is present in a medium to high titer (> 99th percentile).
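The 3-step functional assay above is essentially a small decision tree. A schematic sketch in Python follows; the function and parameter names are illustrative, not a laboratory protocol.

```python
def interpret_la_workup(aptt_prolonged: bool,
                        corrects_with_mixing: bool,
                        corrects_with_phospholipid: bool) -> str:
    """Schematic interpretation of the 3-step lupus anticoagulant workup."""
    # Step 1: a phospholipid-dependent assay (e.g., aPTT) must be prolonged.
    if not aptt_prolonged:
        return "no lupus anticoagulant activity demonstrated"
    # Step 2: 1:1 mixing with normal plasma replaces any missing clotting
    # factors; correction implies a factor deficiency, not an inhibitor.
    if corrects_with_mixing:
        return "clotting factor deficiency"
    # Step 3: excess phospholipid neutralizes an antiphospholipid antibody.
    if corrects_with_phospholipid:
        return "lupus anticoagulant present"
    return "inconclusive; consider other inhibitors"

# A pattern like the vignette patient's: prolonged aPTT that fails to
# correct with mixing but corrects with added phospholipid.
print(interpret_la_workup(True, False, True))  # lupus anticoagulant present
```

The logic makes explicit why the mixing study precedes the phospholipid step: it rules out the factor-deficiency explanation before the inhibitor is characterized.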
Studies have shown that antiphospholipid antibodies may be present in 1% to 5% of apparently healthy people in the general population.31 These are usually low-titer anticardiolipin or anti-beta-2GPI IgM antibodies that are not associated with thrombosis or adverse pregnancy outcomes. Hence, the term antiphospholipid syndrome should be reserved for those who have had at least 1 episode of thrombosis or pregnancy morbidity and persistent antiphospholipid antibodies, not those with asymptomatic or transient antiphospholipid antibodies. Triple positivity (positive anticardiolipin, lupus anticoagulant, and anti-beta-2GPI) appears to carry the highest risk of thrombosis, with a 10-year cumulative incidence of 37.1% (95% confidence interval [CI] 19.9–54.3) for a first thrombotic event32 and 44.2% (95% CI 38.6–49.8) for recurrent thrombosis.33 The association with thrombosis is stronger for lupus anticoagulant than for the other 2 antibodies, with different studies34 finding odds ratios ranging from 5 to 16. A positive lupus anticoagulant test with or without a moderate to high titer of anticardiolipin or anti-beta-2GPI IgM or IgG constitutes a high-risk profile, while a moderate to high titer of anticardiolipin or anti-beta-2GPI IgM or IgG constitutes a moderate-risk profile. A low titer of anticardiolipin or anti-beta-2GPI IgM or IgG constitutes a low-risk profile that may not be associated with thrombosis.35 Antiphospholipid syndrome is important to recognize because of the need for long-term anticoagulation to prevent recurrence.36 It may be primary, when it occurs on its own, or secondary, when it occurs in association with another autoimmune disease such as lupus.
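The risk stratification just described can be restated as a small classification rule. The Python sketch below is an illustrative simplification of the cited scheme; the string-valued titer levels are my own shorthand.

```python
def aps_antibody_risk_profile(lupus_anticoagulant: bool, titer: str) -> str:
    """Antibody risk profile per the scheme described in the text.

    `titer` is the anticardiolipin / anti-beta-2GPI titer, simplified to
    "high" (moderate to high), "low", or "none".
    """
    if lupus_anticoagulant:
        # Positive lupus anticoagulant, with or without other antibodies.
        return "high risk"
    if titer == "high":
        return "moderate risk"
    if titer == "low":
        return "low risk"
    return "no antibody risk profile"

print(aps_antibody_risk_profile(True, "none"))   # high risk
print(aps_antibody_risk_profile(False, "high"))  # moderate risk
print(aps_antibody_risk_profile(False, "low"))   # low risk
```

Note how lupus anticoagulant dominates the stratification, consistent with its stronger association with thrombosis (odds ratios of 5 to 16 in the cited studies).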
Venous events in antiphospholipid syndrome most commonly manifest as lower-limb deep vein thrombosis or pulmonary embolism, while arterial events most commonly manifest as stroke or transient ischemic attack.37 Obstetric manifestations may include not only miscarriage and stillbirth but also preterm delivery, intrauterine growth retardation, and preeclampsia, all occurring due to placental insufficiency. The frequency of antiphospholipid antibodies has been estimated at 13.5% in patients with stroke, 11% with myocardial infarction, 9.5% with deep vein thrombosis, and 6% with pregnancy morbidity.38 Some noncriteria manifestations have also been recognized in antiphospholipid syndrome, such as thrombocytopenia, cardiac vegetations (Libman-Sacks endocarditis), livedo reticularis, and nephropathy. The indications for antiphospholipid antibody testing are listed in Table 4.29 For the patient described in the vignette, it would be appropriate to test for antiphospholipid antibodies because of her unprovoked thrombosis, thrombocytopenia, and prolonged aPTT. Anticoagulant treatment is known to be associated with false-positive lupus anticoagulant results, so any blood samples should be drawn before such treatment is commenced.

Back to our patient

Our patient’s anticardiolipin IgG test is negative, while her lupus anticoagulant and anti-beta-2GPI IgG are positive. She has no clinical or laboratory features suggesting lupus. She is started on warfarin. After 3 months, the warfarin is interrupted for several days, and she is retested for all 3 antiphospholipid antibodies. Her anti-beta-2GPI IgG and lupus anticoagulant tests are again positive. Because of the persistent antiphospholipid antibody positivity and her clinical history of deep vein thrombosis, her condition is diagnosed as primary antiphospholipid syndrome. She is advised to continue anticoagulant therapy indefinitely.
ANTINEUTROPHIL CYTOPLASMIC ANTIBODY

A 34-year-old man who is an injecting drug user presents with a 2-week history of fever, malaise, and generalized arthralgia. There are no localizing symptoms of infection. Notable findings on examination include a temperature of 38.0°C (100.4°F), needle track marks in his arms, a nonblanching vasculitic rash on his legs, and a systolic murmur over the precordium. Two sets of blood cultures are drawn. Transthoracic echocardiography and the antineutrophil cytoplasmic antibody (ANCA) test are requested, as are screening tests for human immunodeficiency virus, hepatitis B, and hepatitis C. Was the ANCA test indicated in this patient?

ANCAs are autoantibodies against antigens located in the cytoplasmic granules of neutrophils and monocytes. They are associated with small-vessel vasculitides such as granulomatosis with polyangiitis (GPA), microscopic polyangiitis (MPA), eosinophilic granulomatosis with polyangiitis (EGPA), and isolated pauciimmune crescentic glomerulonephritis, collectively known as ANCA-associated vasculitis (AAV).39 Laboratory methods to detect ANCA include indirect immunofluorescence and antigen-specific enzyme immunoassays. Indirect immunofluorescence tells us only whether an antibody targeting a cytoplasmic antigen is present. Based on the indirect immunofluorescent pattern, ANCA can be classified as follows:
- Perinuclear or p-ANCA (if the targeted antigen is located just around the nucleus and extends into it)
- Cytoplasmic or c-ANCA (if the targeted antigen is located farther away from the nucleus)
- Atypical ANCA (if the indirect immunofluorescent pattern does not fit with either p-ANCA or c-ANCA).

Indirect immunofluorescence does not give information about the exact antigen that is targeted; this can be obtained only by performing 1 of the antigen-specific immunoassays.
The target antigen for c-ANCA is usually proteinase-3 (PR3), while that for p-ANCA could be myeloperoxidase (MPO), cathepsin, lysozyme, lactoferrin, or bactericidal permeability inhibitor. Anti-PR3 is highly specific for GPA, while anti-MPO is usually associated with MPA and EGPA. Less commonly, anti-PR3 may be seen in patients with MPA and anti-MPO in those with GPA. Hence, there is an increasing trend toward classifying ANCA-associated vasculitis as PR3-associated or MPO-associated vasculitis rather than as GPA, MPA, EGPA, or renal-limited vasculitis.40

Several audits have shown that the ANCA test is widely misused and requested indiscriminately to rule out vasculitis. This results in a lower positive predictive value, possible harm to patients due to increased false-positive rates, and an increased burden on the laboratory.41–43 At least 2 separate groups have demonstrated that a gating policy that refuses ANCA testing in patients without clinical evidence of systemic vasculitis can reduce the number of inappropriate requests, improve the diagnostic yield, and make testing more clinically relevant and cost-effective.44,45 The clinician should bear in mind the following points.

ANCA testing should be requested only if the pretest probability of ANCA-associated vasculitis is high. The indications proposed by the International Consensus Statement on ANCA testing46 are listed in Table 5.
These criteria have been clinically validated, with 1 study even demonstrating that no cases of ANCA-associated vasculitis would be missed if these guidelines were followed.47

Current guidelines recommend using one of the antigen-specific assays for PR3 and MPO as the primary screening method.48 Until recently, indirect immunofluorescence was used to screen for ANCA-associated vasculitis, and positive results were confirmed by ELISA to detect ANCAs specific for PR3 and MPO.49 This is no longer recommended because of recent evidence of large variability between the different indirect immunofluorescent methods and the improved diagnostic performance of the antigen-specific assays. In a large multicenter study by Damoiseaux et al, the specificity of the different antigen-specific immunoassays was 98% to 99% for PR3-ANCA and 96% to 99% for MPO-ANCA.50

ANCA-associated vasculitis should not be considered excluded if PR3-ANCA and MPO-ANCA are negative. In the Damoiseaux study, about 11% to 15% of patients with GPA and 8% to 24% of patients with MPA tested negative for both PR3-ANCA and MPO-ANCA.50 If the ANCA result is negative and clinical suspicion for ANCA-associated vasculitis is high, the clinician may wish to consider requesting another immunoassay method or indirect immunofluorescence. Indirect immunofluorescent testing may be positive in those with a negative immunoassay, and vice versa.

A positive ANCA result is not diagnostic of ANCA-associated vasculitis. Numerous other conditions are associated with ANCA, usually p-ANCA or atypical ANCA (Table 6). The antigens targeted by these ANCAs are usually cathepsin, lysozyme, lactoferrin, and bactericidal permeability inhibitor. Thus, the ANCA result should always be interpreted in the context of the whole clinical picture.51 Biopsy should still be considered the gold standard for the diagnosis of ANCA-associated vasculitis.
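The testing pathway described in the points above can be condensed into a short decision sketch. This is an illustrative summary in Python, not a clinical algorithm; the function name and return strings are my own.

```python
def anca_testing_pathway(high_pretest_probability: bool,
                         antigen_assay_positive: bool) -> str:
    """Sketch of the ANCA testing pathway described in the text."""
    # Gating: request ANCA only when suspicion of AAV is high.
    if not high_pretest_probability:
        return "do not request ANCA testing"
    if antigen_assay_positive:
        # A positive result supports, but does not prove, AAV;
        # biopsy remains the gold standard.
        return "interpret in clinical context; consider biopsy"
    # A negative antigen-specific assay does not exclude AAV when
    # suspicion is high.
    return "consider another immunoassay or indirect immunofluorescence"

print(anca_testing_pathway(True, False))
```

The gating condition comes first by design: the audits cited above show that skipping it is precisely what erodes the test's positive predictive value.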
The ANCA titer can help improve clinical interpretation, because the likelihood of ANCA-associated vasculitis increases with higher levels of PR3-ANCA and MPO-ANCA.52

Back to our patient

Our patient’s blood cultures grow methicillin-sensitive Staphylococcus aureus in both sets after 48 hours. Transthoracic echocardiography reveals vegetations around the tricuspid valve, with no evidence of valvular regurgitation. The diagnosis is right-sided infective endocarditis. He is started on appropriate antibiotics. The positive ANCA is thought to be related to the infective endocarditis. His vasculitis is most likely secondary to infective endocarditis and not ANCA-associated vasculitis. The ANCA test need not have been requested in the first place.

HUMAN LEUKOCYTE ANTIGEN-B27

A 22-year-old man presents to his primary care physician with a 4-month history of gradually worsening low back pain associated with early morning stiffness lasting more than 2 hours. He has no peripheral joint symptoms. In the last 2 years, he has had 2 separate episodes of uveitis. There is a family history of ankylosing spondylitis in his father. Examination reveals global restriction of lumbar movements but is otherwise unremarkable. Magnetic resonance imaging (MRI) of the lumbar spine and sacroiliac joints is normal. Should this patient be tested for human leukocyte antigen-B27 (HLA-B27)?

The major histocompatibility complex (MHC) is a gene complex that is present in all animals. It encodes proteins that help with immunologic tolerance. HLA simply refers to the human version of the MHC.53 The HLA gene complex, located on chromosome 6, is categorized into class I, class II, and class III. HLA-B is one of the 3 class I genes. Thus, a positive HLA-B27 result simply means that the particular gene is present in that person.
HLA-B27 is strongly associated with ankylosing spondylitis, also known as axial spondyloarthropathy.54 Other genes also contribute to the pathogenesis of ankylosing spondylitis, but HLA-B27 is present in more than 90% of patients with this disease and is considered by far the most important. The association is not as strong for peripheral spondyloarthropathy, with studies reporting a frequency of up to 75% for reactive arthritis and inflammatory bowel disease-associated arthritis, and up to 50% for psoriatic arthritis and uveitis.55 About 9% of healthy, asymptomatic individuals may have HLA-B27, so the mere presence of this gene is not evidence of disease.56 There may be up to a 20-fold increased risk of ankylosing spondylitis among those who are HLA-B27-positive.57

Some HLA genes have many different alleles, each of which is given a number (explaining the number 27 that follows the B). Closely related alleles that differ from one another by only a few amino-acid substitutions are categorized together, accounting for more than 100 subtypes of HLA-B27 (designated HLA-B*2701 to HLA-B*27106). These subtypes vary in frequency among different racial groups, and the population prevalence of ankylosing spondylitis parallels the frequency of HLA-B27.58 The most common subtype seen in white people and American Indians is B*2705. HLA-B27 is rare in blacks, explaining the rarity of ankylosing spondylitis in this population. Further examples include HLA-B*2704, which is seen in Asians, and HLA-B*2702, seen in Mediterranean populations. Not all subtypes of HLA-B27 are associated with disease, and some, like HLA-B*2706, may even be protective.

When should the clinician consider testing for HLA-B27?

Not all patients with low back pain need an HLA-B27 test. First, it is important to look for clinical features of axial spondyloarthropathy (Table 7).
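The point that a positive HLA-B27 result in an asymptomatic person is weak evidence of disease can be illustrated with Bayes' theorem. The Python sketch below uses an assumed population prevalence of ankylosing spondylitis of about 0.5% (an illustrative figure, not from the article), together with the frequencies quoted above: HLA-B27 in more than 90% of patients and about 9% of the general population.

```python
def p_disease_given_b27(prevalence: float,
                        p_b27_in_patients: float,
                        p_b27_in_population: float) -> float:
    """P(ankylosing spondylitis | HLA-B27 positive) by Bayes' theorem."""
    return prevalence * p_b27_in_patients / p_b27_in_population

# Assumed 0.5% prevalence; carrier frequencies as quoted in the text.
risk = p_disease_given_b27(prevalence=0.005,
                           p_b27_in_patients=0.90,
                           p_b27_in_population=0.09)
print(f"Absolute risk if HLA-B27-positive: {risk:.0%}")  # 5%
```

Under these assumptions, an unselected HLA-B27-positive individual still has only a few percent absolute risk, which is why the gene's presence alone is not evidence of disease and why the test is reserved for patients with suggestive clinical features.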
The unifying feature of spondyloarthropathy is enthesitis (inflammation at the sites of insertion of tendons or ligaments on the skeleton). Inflammation of axial entheses causes spondylitis and sacroiliitis, manifesting as inflammatory back pain. Clinical clues to inflammatory back pain include insidious onset, aggravation with rest or inactivity, prolonged early morning stiffness, disturbed sleep during the second half of the night, relief with movement or activity, alternating gluteal pain (due to sacroiliitis), and a good response to anti-inflammatory medication (although this is nonspecific). Peripheral spondyloarthropathy may present with arthritis, enthesitis (eg, heel pain due to inflammation at the site of insertion of the Achilles tendon or plantar fascia), or dactylitis (“sausage” swelling of a whole finger or toe due to extension of inflammation beyond the margins of the joint). Other clues may include psoriasis, inflammatory bowel disease, a history of preceding gastrointestinal or genitourinary infection, a family history of similar conditions, and a history of recurrent uveitis.

For the initial assessment of patients who have inflammatory back pain, plain radiography of the sacroiliac joints is considered the gold standard.59 If plain radiography does not show evidence of sacroiliitis, MRI of the sacroiliac joints should be considered. While plain radiography can reveal only structural changes such as sclerosis, erosions, and ankylosis, MRI is useful to evaluate for early inflammatory changes such as bone marrow edema.
Imaging the lumbar spine is not necessary, as the sacroiliac joints are almost invariably involved in axial spondyloarthropathy, and lesions seldom occur in the lumbar spine in isolation.60 The diagnosis of ankylosing spondylitis previously relied on confirmatory imaging features, but based on the new Assessment of SpondyloArthritis International Society classification criteria,61–63 which can be applied to patients with more than 3 months of back pain and onset of symptoms before age 45, patients can be classified as having 1 of the following:
- Radiographic axial spondyloarthropathy, if they have evidence of sacroiliitis on imaging plus 1 other feature of spondyloarthropathy
- Nonradiographic axial spondyloarthropathy, if they have a positive HLA-B27 plus 2 other features of spondyloarthropathy (Table 7).

These new criteria have a sensitivity of 82.9% and a specificity of 84.4%.62,63 The disease burden of radiographic and nonradiographic axial spondyloarthropathy has been shown to be similar, suggesting that they are part of the same disease spectrum. Thus, the HLA-B27 test is useful to make a diagnosis of axial spondyloarthropathy even in the absence of imaging features and could be requested in patients with 2 or more features of spondyloarthropathy. In the absence of imaging features and with a negative HLA-B27 result, however, the patient cannot be classified as having axial spondyloarthropathy.

Back to our patient

The absence of radiographic evidence does not exclude axial spondyloarthropathy in our patient. The HLA-B27 test is requested because of the inflammatory back pain and the presence of 2 spondyloarthropathy features (uveitis and the family history) and is reported to be positive. His disease is classified as nonradiographic axial spondyloarthropathy. He is started on regular naproxen and is referred to a physiotherapist. After 1 month, he reports significant symptomatic improvement. He asks if he can be retested for HLA-B27 to see if it has become negative.
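The two classification arms above can be sketched as a small function. This is an illustrative Python summary of the criteria as described in the text (parameter names are my own); it is a classification aid, not a diagnostic rule.

```python
def classify_axial_spa(back_pain_months: float, age_at_onset: int,
                       sacroiliitis_on_imaging: bool, hla_b27_positive: bool,
                       n_other_spa_features: int) -> str:
    """Sketch of the axial spondyloarthropathy classification criteria."""
    # Entry criteria: more than 3 months of back pain, onset before age 45.
    if back_pain_months <= 3 or age_at_onset >= 45:
        return "criteria not applicable"
    # Imaging arm: sacroiliitis plus at least 1 other SpA feature.
    if sacroiliitis_on_imaging and n_other_spa_features >= 1:
        return "radiographic axial spondyloarthropathy"
    # Clinical arm: positive HLA-B27 plus at least 2 other SpA features.
    if hla_b27_positive and n_other_spa_features >= 2:
        return "nonradiographic axial spondyloarthropathy"
    return "not classifiable as axial spondyloarthropathy"

# The vignette patient: 4 months of back pain, onset at age 22, normal MRI,
# HLA-B27 positive, 2 other features (uveitis and family history).
print(classify_axial_spa(4, 22, False, True, 2))
# nonradiographic axial spondyloarthropathy
```

The sketch also makes the article's caveat explicit: with neither imaging evidence nor a positive HLA-B27, the patient falls through both arms and cannot be classified.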
We tell him that there is no point in repeating the test, as HLA-B27 is a gene and will not disappear.

SUMMARY: CONSIDER THE CLINICAL PICTURE

When approaching a patient suspected of having a rheumatologic disease, a clinician should first consider the clinical presentation and the intended purpose of each test. The tests, in general, might serve several purposes. They might help to:
- Increase the likelihood of the diagnosis in question. For example, a positive rheumatoid factor or anticitrullinated peptide antibody can help diagnose rheumatoid arthritis in a patient with early polyarthritis, a positive HLA-B27 can help diagnose ankylosing spondylitis in a patient with inflammatory back pain and normal imaging, and a positive ANCA can help diagnose ANCA-associated vasculitis in a patient with glomerulonephritis.
- Reduce the likelihood of the diagnosis in question. For example, a negative antinuclear antibody test reduces the likelihood of lupus in a patient with joint pains.
- Monitor the condition. For example, anti-dsDNA antibodies can be used to monitor the activity of lupus.
- Plan the treatment strategy. For example, one might consider lifelong anticoagulation if antiphospholipid antibodies are persistently positive in a patient with thrombosis.

If the test was requested in the absence of a clear indication and the result is positive, it is important to bear in mind the potential pitfalls associated with that test and not attach a diagnostic label prematurely. None of these tests can confirm or exclude a condition on its own, so the results should always be interpreted in the context of the whole clinical picture.

REFERENCES

- Guidelines for clinical use of the antinuclear antibody test and tests for specific autoantibodies to nuclear antigens. American College of Pathologists. Arch Pathol Lab Med 2000; 124(1):71–81. doi:10.1043/0003-9985(2000)124<0071:GFCUOT>2.0.CO;2
- Lupus anticoagulants are stronger risk factors for thrombosis than anticardiolipin antibodies in the antiphospholipid syndrome: a systematic review of the literature. Blood 2003; 101(5):1827–1832. doi:10.1182/blood-2002-02-0441
- The risk of developing ankylosing spondylitis in HLA-B27 positive individuals. A comparison of relatives of spondylitis patients with the general population. Arthritis Rheum 1984; 27(3):241–249. pmid:6608352
- The development of Assessment of SpondyloArthritis International Society classification criteria for axial spondyloarthritis (part II): validation and final selection. Ann Rheum Dis 2009; 68(6):777–783. doi:10.1136/ard.2009.108233
no
Serology
Can Serological testing determine immunity against COVID-19?
yes_statement
"serological" "testing" can "determine" "immunity" against covid-19.. "serological" "testing" is able to "determine" if someone is "immune" to covid-19.
https://www.nature.com/articles/s41467-021-26774-y
Modeling serological testing to inform relaxation of social distancing ...
Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript. Subjects Abstract Serological testing remains a passive component of the public health response to the COVID-19 pandemic. Using a transmission model, we examine how serological testing could have enabled seropositive individuals to increase their relative levels of social interaction while offsetting transmission risks. We simulate widespread serological testing in New York City, South Florida, and Washington Puget Sound and assume seropositive individuals partially restore their social contacts. Compared to no intervention, our model suggests that widespread serological testing starting in late 2020 would have averted approximately 3300 deaths in New York City, 1400 deaths in South Florida and 11,000 deaths in Washington State by June 2021. In all sites, serological testing blunted subsequent waves of transmission. Findings demonstrate the potential benefit of widespread serological testing, had it been implemented in the pre-vaccine era, and remain relevant now amid the potential for emergence of new variants. Introduction SARS-CoV-2 emerged in China in late 2019 leading to the COVID-19 pandemic, with over 213 million detected cases and over 4.4 million deaths globally and approximately 38 million detected cases and 643,000 deaths reported in the U.S. as of August 24, 20211. Unprecedented social distancing measures were enacted in early 2020 to reduce transmission and blunt the epidemic peak. In March 2020, U.S. states began to close schools, suspend public gatherings, and encourage employees to work from home if possible. 
By mid-April, 95% of the U.S.2 and over 30% of the global population were under some form of shelter-in-place order3. Federal social distancing guidelines expired on April 30, 2020; throughout the summer, many state and local governments relaxed stay-at-home orders partially or completely4. Relaxing these social distancing policies resulted in increased community transmission, and case counts increased as states further relaxed restrictions on public gatherings, restaurant dining, and operation of businesses5. Behavioral change combined with the accelerated transmission in a largely immunologically naïve population resulted in a wave of cases and deaths in the late summer and early Fall 2020, a second, larger wave in the Fall, and then a third wave in the Winter of 2020. During late spring 2021, the widespread availability of SARS-CoV-2 vaccines in the United States coupled with higher levels of natural immunity allowed social distancing interventions to be relaxed further. Despite the widespread availability of vaccines, a fourth wave of cases in the US beginning in late Summer 2021 is due to multiple factors, including fatigue from adhering to strict social distancing measures and heterogeneous vaccine coverage. The rise of more transmissible variants of concern6,7 as well as the possibility of variants that escape natural or vaccine-derived immunity8 continues to require vigilance in the event that COVID-19 incidence increases again. Indeed, this fourth wave reinforces the need to evaluate other measures—including individualized policies based on disease or immune status—as part of integrative response campaigns9. In this paper, we explore how immune shielding could be used to further reduce population risk. A shielding strategy aims to identify and deploy recovered/vaccinated (and likely immune) individuals as focal points for sustaining less risky interactions. 
This strategy has the objective of sustaining interactions necessary for the functioning of essential services while reducing the risk of exposing individuals who remain susceptible to infection. As the basis for a shielding strategy, widespread serological testing programs have the potential to identify individuals or groups who are likely immune, allowing some individuals to return to activities while keeping deaths and hospital admissions at sufficiently low levels. In this strategy, individuals who test positive would preferentially replace susceptible individuals in close-contact interactions, such that more contacts are between susceptible and immune individuals rather than between susceptible and potentially infectious individuals10. Immune shielding may be particularly useful given the high incidence in focal regions resulting from incomplete vaccine coverage and partial levels of population immunity11. Serosurveys of SARS-CoV-2 in the U.S. vary in their estimates of seroprevalence but collectively suggest that infections far outnumber documented cases12,13,14,15,16. To the extent that antibodies serve as a correlate of immunity, serological testing may be used to identify protected individuals17. While our understanding of the immunological response to SARS-CoV-2 infection remains incomplete, the vast majority of infected individuals seroconvert18, with detectable antibody levels persisting at least several months after infection for the majority of individuals19. SARS-CoV-2 reinfections have been documented but remain relatively rare (though emerging variants of concern have raised questions regarding breakthrough infection rates20). Together, these data suggest that recovered individuals have substantial protection against subsequent re-infection. 
Once identified, antibody test-positive individuals could return to pre-pandemic levels of social interaction and thereby dilute (via shielding) potentially risky interactions between susceptible and infectious individuals10—keeping in mind that variants of concern may require that other NPIs still be used (e.g., masking in indoor settings). Such strategies, however, rely on correctly identifying immune individuals. More than 50 serological assays for the detection of SARS-CoV-2 antibodies have been authorized for emergency use by the Food and Drug Administration21, and their performance varies considerably21,22,23. For the purpose of informing safe social distancing policies, specificity rather than sensitivity is of primary concern: an imperfectly specific test will produce false positives, leading to individuals being incorrectly classified as immune. If used as a basis to relax social distancing measures, this error could heighten the risk for individuals who test positive and increase community transmission. For this reason, this paper integrates serological testing into a COVID-19 transmission model to evaluate the level of testing needed to reduce expected fatalities while increasing the fraction of focal populations who can re-engage in socio-economic activities.

Results

To evaluate the epidemiological consequences of using mass serological testing to inform the relaxation of social distancing measures in the pre-vaccine era, we modeled transmission dynamics and serological testing for SARS-CoV-2 using a deterministic, compartmental SEIR-like model (Fig. 1). Recovered, susceptible, latently infected, and asymptomatic persons test positive at rates that are functions of testing frequency, sensitivity (for recovered individuals), and specificity (for non-immune individuals).
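The core structure described here—an SEIR-like system with testing flows governed by sensitivity and specificity—can be sketched compactly. The block below is our own illustrative simplification, not the authors' code (which is age-structured and fitted per region); all parameter values are made up for demonstration.

```python
# Minimal sketch of the model structure (not the authors' fitted model): an
# SEIR-like system in which serological testing moves recovered individuals
# into a test-positive group at rate tau * sensitivity, and mistakenly moves
# susceptibles there at rate tau * (1 - specificity). False positives remain
# susceptible to infection. No age structure; all values are illustrative.
beta, sigma, gamma = 0.5, 1 / 4, 1 / 7   # transmission, 1/latent, 1/infectious
tau, sens, spec = 1 / 30, 0.95, 0.998    # ~monthly testing; assay accuracy

def step(state, dt=0.1):
    S, E, I, R, Pf, Pt = state           # Pf: false positives (non-immune)
    N = sum(state)
    foi = beta * I / N                   # force of infection
    dS = -foi * S - tau * (1 - spec) * S
    dE = foi * (S + Pf) - sigma * E      # false positives can still be infected
    dI = sigma * E - gamma * I
    dR = gamma * I - tau * sens * R      # recovered awaiting a true-positive test
    dPf = tau * (1 - spec) * S - foi * Pf
    dPt = tau * sens * R                 # true-positive, immune group
    return [x + dt * d for x, d in zip(state, (dS, dE, dI, dR, dPf, dPt))]

state = [9990.0, 0.0, 10.0, 0.0, 0.0, 0.0]
for _ in range(3650):                    # one year, Euler steps of 0.1 day
    state = step(state)
S, E, I, R, Pf, Pt = state
```

Because the flows only move individuals between compartments, the population total is conserved, and with a highly specific test the true-positive group ends up far larger than the false-positive one.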
We model contacts at home, work, school, and other locations among three age groups: children and young adults (<20 years), working adults (20–64 years), and the elderly (65+ years). We used a Markov Chain Monte Carlo (MCMC) approach to fit the model to time series of deaths24 and cross-sectional seroprevalence data16 from three U.S. metropolitan areas with distinct COVID-19 epidemic trajectories (the New York City Metro Region, South Florida, and the Washington Puget Sound region), accounting for changes in policy impacting social distancing behaviors, in order to evaluate an immunological shielding strategy in each region.

Fig. 1: Overall model diagram. Serological antibody testing is shown by dashed arrows. Red dashed arrows indicate either false positives (someone who is not immune is moved to the test-positive group), which occur at a rate that is a function of 1 − specificity, or false negatives (someone who is recovered stays in the test-negative group). True positives occur at a rate that is a function of the sensitivity. The hospitalization compartments are located in the “Not tested/test-negative” layer for simplicity, though individuals who incorrectly test positive could move to these compartments after developing a symptomatic infection.

Model fits to fatalities and serological data

We explored the impacts of social distancing on epidemic outcomes in the absence of serological testing. To do so, we first used MCMC model–data integration to fit the model to reported deaths and seroprevalence point estimates from each of three metropolitan areas. Model fits reproduced reported death trends reasonably well through June 2020 and seroprevalence estimates early in the outbreak (see Fig. 2 for fits; Supplementary Figs. 1–10 for full model diagnostics). Of note, fits were poorer for New York City, which was not unexpected given the unique severity of the initial pandemic wave there. Fits were moderately good for Washington Puget Sound and best for South Florida.
We note that the probability of infection per contact had narrow credible intervals (CrI), indicating posterior confidence in the ability of the model to uniquely identify parameter sets consistent with key features of infection. Credible intervals were wider for the fraction of infections that were symptomatic and widest for the social distancing parameters, indicating limits of parameter identifiability. Nonetheless, the consistency of fits across multiple independently sampled chains implies that the model outcomes early in the epidemic are insensitive to variation in these parameters, enabling us to evaluate baseline predictions with and without serological testing.

Fig. 2: Consistency between the fitted model and the deaths/seroprevalence data for New York City, South Florida, and Washington Puget Sound. The first row shows daily critical care cases through July 1. The second row shows the cumulative number of recovered (previously infected) individuals; red squares show the seroprevalence estimates from Havers et al. in each location16. The third row shows cumulative deaths, with death data shown as blue squares24. Data are presented as the mean (black line) ±1.96 sd, calculated from 100 random samples; gray bands show 95% credible intervals derived from the last 5000 iterations of converged MCMC chains.

Epidemic dynamics in the absence of serological testing

In all three sites, our models predicted a second epidemic peak in the fall and winter of 2020–2021, consistent with the qualitative shape of the epidemic trajectory (Fig. 3). For New York, the second peak is predicted to be smaller than the first, whereas the second wave is expected to be larger than the first in Washington and South Florida.
If social distancing were sustained at fall 2020 levels without any further interventions, our model predicts that 46–55% of the population across the three metropolitan areas (55% in New York City, 95% credible interval (CrI): 27–69%; 46% in South Florida, 95% CrI: 31–60%; and 46% in Washington, 95% CrI: 2–55%) would be infected with SARS-CoV-2 by June 2021, resulting in 72,000 cumulative deaths across the three sites (43,000 deaths in New York City, 95% CrI: 21,000–64,000; 10,000 deaths in South Florida, 95% CrI: 6000–17,000; and 19,000 deaths in Washington, 95% CrI: 1000–32,000) since the start of the pandemic (Fig. 4, top row). In reality, the death count for all three locations was 50,272 (34,492 in New York City, 12,729 in South Florida, 3051 in Washington)—within the 95% CrI in all cases.

Fig. 4: Dates corresponding to the start of general social distancing in March 2020 and the lifting of stay-at-home (SAH) orders in May and June 2020 are based on the dates that policies were enacted, or restrictions lifted, in each location. We assume that schools reopened at 50% capacity on September 1, 2020 in South Florida and October 1, 2020 in Washington and New York. Dotted lines show the impacts of a test with 90% specificity and solid lines show a test with 99.8% specificity. The 99.8% specificity scenario represents the accuracy reported among antibody tests currently authorized for use in the U.S., whereas the 90% specificity scenario is meant to capture reductions in accuracy that might be expected in a mass testing program. The top row shows cumulative deaths by location (panels) by daily testing rate from March 2020 to February 2021 for the scenario with 5:1 shielding, with schools reopening on September 1, 2020 in South Florida and October 1, 2020 in Washington and New York. Colored lines show test specificity. The gray horizontal line shows the number of deaths in the no-testing scenario for each location.
The bottom row shows the fraction of the population of each metropolitan area released from social distancing by June 1, 2021, assuming 5:1 shielding. Line colors correspond to testing levels; blue is monthly testing (10 million tests/day) of the U.S. population. Dashed lines show expected results with a highly specific test (specificity = 99.8%) and solid lines show expected results with a test with 90% specificity. The 99.8% specificity scenario represents the accuracy reported among antibody tests currently authorized for use in the U.S., whereas the 90% specificity scenario is meant to capture reductions in accuracy that might result from the implementation of a mass testing program. The 50% specificity level represents a scenario in which an antibody test cannot distinguish between immune and non-immune individuals.

Epidemic dynamics with serological shielding

Next, we retrospectively assess the benefit of a serological shielding strategy implemented in fall 2020 in each metropolitan area, assuming that test-positive individuals increase their relative rate of interactions, thereby shielding susceptible individuals and reducing the risk of transmission. Specifically, individuals who test positive return to work and increase other contacts to normal levels. We assume that test-negative and untested individuals continue to work from home if their job allows them to do so. To reflect the placement of test-positive individuals in high-contact roles, we assume that contacts at work and other (non-home, non-school) locations occur preferentially with test-positive persons. When shielding interactions are 5:1 relative to those of individuals under social distancing guidelines, the probability of interacting with a test-positive individual is five times what would be expected given the frequency of test-positive individuals in the population, following the model of fixed shielding described in ref. 10.
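The fixed-shielding contact rule can be made concrete with a few lines of arithmetic. This is our own illustrative sketch of the reweighting idea (test-positive individuals are over-represented in contacts by a factor alpha); the population sizes below are hypothetical.

```python
# Illustrative sketch (not the authors' code) of "fixed" shielding: with
# shielding factor alpha, test-positive individuals are over-represented in
# contacts, so the chance that a random contact is with a test-positive
# person is alpha * P / ((N - P) + alpha * P) for P positives among N people.
def contact_fractions(n_total, n_positive, alpha=5.0):
    """Return (share of contacts with test-positives, share with everyone else)."""
    weighted_pos = alpha * n_positive
    weighted_other = n_total - n_positive
    total = weighted_pos + weighted_other
    return weighted_pos / total, weighted_other / total

# With 10% of the population test-positive and 5:1 shielding, over a third of
# contacts are with likely-immune individuals, diluting the chance that a
# susceptible person meets someone infectious.
p_shield, p_other = contact_fractions(10_000, 1_000, alpha=5.0)
print(round(p_shield, 3))  # prints 0.357 (i.e., 5000 / 14000)
```

With alpha = 1 (no shielding) the share of contacts with test-positives collapses back to their raw population fraction.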
In each site, monthly serological testing of the population leads to a flattened epidemic curve in the fall and winter of 2020–2021. Widespread serological testing combined with moderate serologically-informed shielding (5:1) starting on November 1, 2020, using a highly specific test, could have reduced cumulative deaths by June 2021 by 22% across the three sites combined. The strongest reductions are in Washington (59%, 95% CrI for deaths averted: 0–17,000), with a lower relative impact in New York City (8%, 95% CrI for deaths averted: 300–600) and South Florida (14%, 95% CrI for deaths averted: 900–1300) (Fig. 4, top row).

Impacts of serological testing frequency on epidemic outcomes and release from social distancing

Simulations of test-based interventions reveal that the magnitude of the benefit from serological shielding depends on the frequency of testing, with more frequent testing resulting in both larger reductions in deaths and a greater proportion of the population being released from social distancing if a highly specific test is used (Fig. 4, bottom row). In New York, monthly population testing would have been needed to maximize the potential benefit, leading to 51% of the population being released from social distancing by June 1, 2021 (95% CrI: 27–70%) and deaths being reduced by 3000 (95% CrI for total deaths: 20,000–63,000). In contrast, annual population testing would have been expected to release 26% of the population from social distancing (95% CrI: 13–37%) with 1500 deaths averted (95% CrI for total deaths: 20,000–64,000). More frequent testing would also have been beneficial in Washington; monthly testing would have released 21% of the population from social distancing (95% CrI: 3–33%) with 11,000 deaths averted (95% CrI for total deaths: 1000–15,000), compared with only 14% released (95% CrI: 1–22%) and 4000 deaths averted (95% CrI for total deaths: 1000–26,000) with annual testing.
In South Florida, 41% (95% CrI: 24–61%) of the population would have been released from social distancing with monthly testing, compared with 21% (95% CrI: 12–32%) with yearly testing. Monthly testing would have averted 1500 deaths in South Florida (95% CrI for deaths: 5000–15,000), whereas annual testing would have averted 500 deaths (95% CrI for deaths: 6000–16,000). While increasing the intensity of social distancing toward the level of restrictions observed in April 2020 could help reduce deaths, these same benefits could be achieved by adding serological testing as part of a control strategy, allowing social distancing to be safely relaxed. As social distancing measures are relaxed further, testing frequency should also increase to minimize deaths and maximize the proportion of the population that can be released (Fig. 5). The extent to which testing frequency must increase to compensate for relaxing social distancing varies by location. For example, in New York City and South Florida, distancing could have been relaxed fully if monthly testing was employed.

Fig. 5: Cumulative deaths and number released from distancing by testing level and contact reductions. Contour plot of cumulative deaths in each location from November 1, 2020 to June 1, 2021 (left column) and the number of people released from social distancing (right column) as a function of the degree of relaxation of social distancing and the number of tests per day. The far right of the x-axis corresponds to a pre-pandemic level of contact and the far left corresponds to the contact levels in each location during stay-at-home orders in March–June 2020. Both panels assume a test specificity of 99.8% and a shielding factor of 5:1.

Impacts of serological testing performance and shielding on epidemic outcomes and release from social distancing

The value and safety of a serological testing strategy depend on the level of shielding and test specificity.
Thus far, our results have centered on dynamics enabled by a high-performance test with a specificity of 99.8%, consistent with the high end of the range of reported specificities of available antibody tests21. We also explored the impact of employing a suboptimal test with 90% specificity, consistent with the lower range of approved tests plus additional decreases in accuracy due to rolling out testing at mass scale. Under this scenario, cumulative deaths across the three locations (66,000) would have been lower than with no testing (72,000) but higher than with a high-performance test (56,000), with 93–99% of the population released from social distancing (New York: 99%, 95% CrI: 95–99%; South Florida: 98%, 95% CrI: 97–100%; Washington: 93%, 95% CrI: 73–97%). However, if monthly testing with a suboptimal assay (90% specificity) were implemented without shielding, 97–99% of the population would have been released from social distancing (99% in New York City, 95% CrI: 95–99%; 98% in South Florida, 95% CrI: 97–100%; and 97% in Washington, 95% CrI: 73–99%) and 78,000 deaths would be expected, more than if no testing were implemented. Overall, adding shielding to a monthly testing strategy results in 10–27% fewer deaths compared to testing at the same frequency without shielding (10% in New York; 14% in South Florida; 27% in Washington). We also set test specificity to 50% to represent a scenario in which antibodies are not a reliable correlate of immunity (i.e., the test is poor at distinguishing between immune and non-immune individuals). If antibodies are not a reliable correlate of protection (which would run counter to current evidence that neutralizing antibodies persist for months25), then serological testing could lead to more deaths than no testing at all (Fig. 4, top panel).
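Why specificity dominates here can be seen with standard positive-predictive-value arithmetic. The sketch below is our own back-of-envelope calculation (not from the paper): among people who test antibody-positive, the share who are truly immune collapses at low seroprevalence when specificity drops.

```python
# Back-of-envelope sketch (ours, illustrative): the positive predictive
# value of an antibody test as a function of seroprevalence, sensitivity,
# and specificity. Parameter values are illustrative.
def ppv(seroprev, sensitivity=0.95, specificity=0.998):
    true_pos = sensitivity * seroprev
    false_pos = (1 - specificity) * (1 - seroprev)
    return true_pos / (true_pos + false_pos)

for spec in (0.998, 0.90):
    for prev in (0.05, 0.20):
        print(f"spec={spec}, seroprev={prev:.0%}: PPV={ppv(prev, specificity=spec):.2f}")
```

With a 99.8% specific test, the PPV stays high even at 5% seroprevalence; with a 90% specific test at 5% seroprevalence, only about a third of positives are truly immune, and the PPV recovers as seroprevalence rises, mirroring the text's point that the false-positive burden shrinks over time.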
We conclude that shielding strategies avert deaths at any level of social distancing, even when using a moderately specific test (90%), so long as antibodies provide a reasonably good correlate of protection (Fig. 5). As a sensitivity analysis, we also explored how uncertainty in the natural history parameters (latent period, relative transmissibility of asymptomatic infections, hospital length of stay, and duration of symptomatic and asymptomatic infection) altered the impact of testing and shielding. In general, if asymptomatic cases are more able to transmit than we have assumed in our main model, the impact of shielding would be enhanced. In contrast, faster recovery rates for both symptomatic and asymptomatic cases could decrease the ultimate impact of shielding, particularly in South Florida and Washington, where the initial epidemic wave was relatively mild. Changing the latent period and the duration of hospitalization had minimal impact on the results (Supplementary Figs. 12–14).

Discussion

Having achieved reasonably good fits of the model to historical data, our simulation study reveals that sufficiently frequent testing using high-performance tests, combined with serological shielding in the pre-vaccine era, would have decreased deaths and allowed relaxed social distancing for a substantial fraction of the population. First, we found that maintaining moderate social distancing equivalent to levels in fall 2020, together with monthly serological testing, could have relieved 21–51% of several U.S. metropolitan populations from social distancing by June 2021. Second, if moderate shielding were employed, a strategy with serological testing would have resulted in up to 16,000 fewer deaths than a strategy without testing in the three focal areas.
Adding shielding alongside monthly testing could have further reduced mortality and allowed a substantial fraction of the population to return to work and other activities in relative safety, without the social and economic costs of strict, prolonged social distancing measures26. Third, we find that such a strategy could in fact prove dangerous, resulting in more deaths, if the serology test is non-specific or if antibodies are not a reliable indicator of immunity. Vaccine passports are already in place in some countries to identify individuals with immunity due to vaccination or a recent history of infection27. While our models were fit to data from before COVID-19 vaccines were widely available, the principle of serological shielding may still prove useful. For example, serological testing could be used to complement vaccination status, identifying more individuals with immunity for whom social distancing could be more safely relaxed. An aggressive, monthly testing approach is unprecedented but, we argue, may be feasible and warranted under certain scenarios, considering the continued social and economic impact of the epidemic. Implementation would require a significant and rapid scale-up of serological testing capacity. Such a scale-up was achieved in the U.S. for diagnostic PCR testing, which expanded from fewer than 1000 tests per day in early March to nearly 250,000 tests per day in mid-May and about 1 million per day by September. Moreover, recently developed serological tests are quicker to perform than RT-PCR28,29. New York City reported performing a peak of 187,000 tests per week in late March 202030, which corresponds to a rate between the yearly and 10%-per-year testing scenarios we consider in our analysis and could likely be increased with a concerted focus on antibody testing. Highly specific, self-administered bloodspot assays, as well as saliva-based tests31,32, could ease some of the logistical challenges of large-scale testing.
Still, there is a legitimate concern that using serological tests to relax social distancing could increase population risk33. On the contrary, we show that coupling serological testing using available diagnostic tests with immune shielding can form the basis of a successful risk-mitigation strategy. If testing employs the most specific assays available, the false-positive proportion would remain low and would decrease over time as seroprevalence increases. If test specificity is closer to 90%, the proportion of test-positives that are false could have reached 50% in all sites, while remaining lower than the prevalence of positives in the untested population. As such, deploying immune individuals so that they are responsible for more interactions than susceptible individuals will reduce risk. If shielding is not employed, this benefit disappears and testing can become a liability, reinforcing the critical need to combine serological testing with a shielding strategy. Importantly, false positives are unlikely to substantively impact population-level risk at the levels of specificity reported by most authorized serological tests21,34. In this modeling study, there are a number of assumptions and limitations that should be considered. First, while the credible intervals of cumulative mortality from our models overlap with realized outcomes, there remains considerable uncertainty regarding the extent to which individuals continued to practice social distancing through mid-2021, a key parameter in our models. This underscores the challenge of predicting the trajectory of the epidemic amidst uncertainty about shifting behavioral patterns. Social distancing was broadly adopted in the initial response in the United States35, but quantifying ever-evolving patterns of social mixing is challenging and little empirical data on behavioral patterns from March 2020 onward are available. Nevertheless, modeling the impact of changing contact patterns on disease transmission is a critical aspect of our model.
Second, our models assumed random allocation of serological testing. In practice, targeting testing to specific groups, such as healthcare workers, nursing home care providers, food service employees, or contacts of confirmed or suspected cases, might increase efficiency by increasing the test-positive rate (and consequently, cost-effectiveness36), allowing similar numbers of individuals to be released from social distancing at lower testing levels. This strategy would also decrease the false-positive rate, an important consideration if a less specific test is used37. Many healthcare organizations have already begun to offer antibody testing to their employees38. The use of serological testing and shielding within healthcare settings represents a smaller-scale, more targeted application of a testing and shielding strategy39. Third, we have made three critical simplifying assumptions in our model. We assume that antibodies are immediately detectable after resolution of infection; in reality, this generally occurs between 11 and 14 days post infection40. A small fraction of recent infections would therefore go undetected, but this would likely have a minor effect on our results. Next, we assume that immunity lasts for the duration of our simulations, or at least 15 months. Both the duration of antibody protection and the extent to which those antibodies protect against future infection remain unclear. However, the vast majority of individuals who are infected seroconvert18, and ongoing studies of SARS-CoV-2 show that antibodies persist for at least several months40. Even as antibodies wane, this does not necessarily imply the loss of immune protection41. Ongoing studies are needed to determine whether these same patterns hold true for newly emerging variants. In addition, we assume that antibodies detected by serology are a correlate of protection.
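The efficiency argument for targeted testing can be made concrete with quick arithmetic of our own (not from the paper): the expected number of true positives per test scales with the tested group's seroprevalence, so a fixed testing budget releases more people when aimed at higher-prevalence groups. The seroprevalence values below are hypothetical.

```python
# Quick illustrative arithmetic (ours) for targeted vs. random testing:
# expected true positives per 1000 tests scale with the tested group's
# seroprevalence. Seroprevalence values below are hypothetical.
def true_positives_per_1000_tests(seroprev, sensitivity=0.95):
    return 1000 * sensitivity * seroprev

random_pop = true_positives_per_1000_tests(0.10)  # ~10% population seroprevalence
targeted = true_positives_per_1000_tests(0.30)    # e.g., a high-exposure workforce
```

Testing a group with three times the seroprevalence yields three times the true positives per test, which is why targeting can achieve a similar number of released individuals at lower testing levels.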
While antibody levels have been shown to wane after several months42,43, especially for individuals with mild infection25,41, protection following natural infection remains substantial44, even when antibodies are undetectable. Finally, we assume that the serology data from Havers et al.16 are representative of the metropolitan areas in which the studies were conducted. In reality, convenience sampling was used in each location, taking advantage of medical visits made for other reasons. While these numbers might be biased, the direction of any such bias is unclear, and Havers et al. remains the best serological data available at the time of writing. We also assume that the age-specific case fatality rates are constant over time45. If the actual fatality rate declined over time, this may have led us to overestimate the number of deaths. Even if testing can be scaled up, legal and ethical concerns remain. Requiring evidence of a positive test to return to activities may create strong incentives for individuals to misrepresent their immune status or to intentionally infect themselves, although this is less of a concern amid the widespread availability of vaccines. Nonetheless, a mass testing program must consider how such policies might reinforce existing social disparities and guard against inequities in test availability46,47. Moreover, attention must be paid to the potential risk posed by re-infection, which is of particular concern with new variants. We have focused our analysis on serological testing, using the principles of serological shielding to reduce the risk of infection for susceptible individuals, but this principle also applies to vaccination48. As of July 2021, three vaccines against SARS-CoV-2 are widely available throughout the United States49 and over 68% of U.S. adults have received at least one dose7. However, vaccination uptake varies geographically.
In the United States, if a rebound in transmission occurs among unvaccinated individuals as variants of concern become more widespread, vaccinated and/or seropositive individuals could also be preferentially placed in high-contact positions to serve as immune “shields”. This could allow transmission to be better controlled, even as social distancing interventions continue to be relaxed. If a novel strain emerges that escapes vaccine-derived and natural immunity, additional testing could identify individuals who have immunity against the escape variant, allowing shielding to remain a viable strategy. This strategy might be particularly beneficial in high-risk settings, such as healthcare or long-term care facilities. A serological testing strategy could be one component of the continued public health response to COVID-19, alongside vaccination, viral testing, masking, and contact tracing. Our results show that serological testing coupled with shielding could have mitigated the impacts of the COVID-19 pandemic while also allowing a substantial number of individuals to safely return to social interactions and economic activity, suggesting a future role for serological testing in the ongoing public health response to COVID-19 amid low vaccination coverage and the continuing threat of emergent SARS-CoV-2 variants.

Methods

We modeled the transmission dynamics of SARS-CoV-2 using a deterministic, compartmental SEIR-like model (Fig. 1). We assume that after a latent period, infected individuals progress to either asymptomatic or symptomatic infection. A fraction of symptomatic cases are hospitalized, with a subset of those requiring critical care. Surviving cases, both asymptomatic and symptomatic, recover and are assumed to be immune to re-infection. All individuals who have not tested positive and are not currently experiencing symptoms of respiratory illness are eligible to be tested, and all hospitalized cases are tested prior to discharge.
Recovered individuals are moved to the test-positive group at a rate that is a function of test sensitivity. Susceptible, latently infected, and asymptomatic cases may falsely test positive and are moved to the test-positive group at a rate that is a function of test specificity. False positives may become infected, but the inaccuracy of their test result is not recognized unless they develop symptoms that are sufficiently severe to warrant hospitalization and health providers correctly diagnose COVID-19, overriding the history of a positive antibody test. The ordinary differential equations corresponding to this model are included in SI Appendix, Section S1. All models were run in R (version 3.6.2) using the package deSolve. Translations of the baseline model are available in MATLAB and Python. Fitting, estimation, and visualization of fits were implemented in MATLAB R2019a and Python version 3.7.3. Code is available at https://github.com/lopmanlab/Serological_Shielding. There are three age groups represented in the model: children and young adults (<20 years), working adults (20–64 years), and the elderly (65+ years). We modeled age-specific mixing based on POLYMOD data adapted to the population structure of the United States50,51. Contacts in this survey were reported based on whether they occurred at home, school, work, or another location. All baseline social contact matrices were based on Prem et al.51 and were made symmetric using the symmetric = TRUE option of the contact_matrix function in the socialmixr R package (version 0.1.6). General social distancing began on the day that stay-at-home orders were enacted in each location. Although adherence to these measures varied and is generally difficult to measure, we made several assumptions about how these policies changed location-specific contacts.
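The reciprocity correction behind symmetric contact matrices (the symmetric = TRUE option mentioned above) can be sketched as follows. This is our own illustration of the standard adjustment, not the socialmixr source: raw survey matrices need not satisfy the bookkeeping identity N_i · m_ij = N_j · m_ji, so each pair of entries is averaged on the scale of total contacts. The population sizes and contact rates below are invented.

```python
# Sketch (ours, illustrative) of the reciprocity correction used to make
# survey contact matrices symmetric: per-capita rates m[i][j] are converted
# to total i->j contacts, the total-contact matrix is averaged with its
# transpose, and the result is converted back to per-capita rates.
import numpy as np

def symmetrize(m, pop):
    """Return m' satisfying N_i * m'_ij == N_j * m'_ji."""
    m, pop = np.asarray(m, float), np.asarray(pop, float)
    total = pop[:, None] * m            # total contacts from group i to group j
    total_sym = (total + total.T) / 2   # enforce reciprocity
    return total_sym / pop[:, None]     # back to per-capita contact rates

pop = np.array([60e6, 190e6, 55e6])     # <20, 20-64, 65+ (invented sizes)
m = np.array([[9.0, 3.0, 0.6],
              [2.0, 7.5, 0.8],
              [0.9, 2.2, 1.7]])         # invented reported contact rates
ms = symmetrize(m, pop)
assert np.allclose(pop[:, None] * ms, (pop[:, None] * ms).T)
```

Note that diagonal entries are unchanged by the correction; only cross-group pairs are reconciled.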
First, we assume that under these measures, all contacts at school were eliminated and that contacts outside of home, work, and school (“other” locations) were reduced by a fraction that was fitted for each location. We assume that contacts at home remained unchanged. To address differences in work-based contacts by occupation type, we classified the working adult population into three subgroups based on occupation: (i) those with occupations that enable them to work exclusively from home during social distancing, (ii) those continuing to work but who reduced their contacts at work (e.g., customer-facing occupations such as retail), and (iii) those continuing to work with no change in their contact patterns (e.g., frontline healthcare workers). The percent reduction in other contacts and the percent contact reduction at work for essential workers who could reduce their contacts were fitted (see next section). This period of intense social distancing lasted until stay-at-home orders were lifted in each location. All three municipalities enacted social distancing regulations in mid-March 202052,53,54, and other contacts were substantially reduced under these measures55. After reopening begins, we assume that schools remain closed but that social distancing measures for the general population can be relaxed by allowing work and other contacts to increase. In accordance with school reopening policies in each location, we assume that schools remained closed until September 1, 2020 in South Florida and October 1, 2020 in Washington and New York.
To represent a general relaxation of social distancing, we scale contacts at work and other locations to a proportion of their value under general social distancing using a scalar constant, c: c = 1 is equivalent to maintaining the social distancing measures put into place in March, and c = 0 is equivalent to a return to pre-pandemic contact levels (for both work and other contacts for essential workers, and for other contacts for all other groups). Based on local policies, we assume that children returned to school on September 1, 2020 in South Florida and October 1, 2020 in Washington and New York City. To account for the fact that schools have taken a variety of measures to reduce contact among students, we assumed that children halved (50%) their pre-pandemic contacts at school.

Nonlinear model-data fitting

We fit the model for each location to deaths reported due to COVID-19 from March to July 202024, as well as to seroprevalence data16, using a Markov Chain Monte Carlo (MCMC) approach56,57 implemented with the MCMCstat toolbox (https://mjlaine.github.io/mcmcstat/)58,59. Each location was defined using the same counties as were included in a CDC-led seroprevalence study16. For each location, we estimated the six parameters listed in the SI Appendix. Model parameter values are shown in SI Appendix, Tables S1 and S2. Reductions in social contacts corresponding to these fitted parameter values are shown in SI Appendix, Table S3. We first performed Latin Hypercube Sampling to generate random parameter sets. Using each set, we ran initial fits and started our MCMC runs from the ten sets associated with the minimum errors. We ran these ten randomly seeded chains for 100,000 iterations each (95,000 burn-in; 5000 samples). To infer the initial conditions, we first calculated the number of weeks between the first reported death in each location and the first week in which the cumulative death toll exceeded 10.
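The c-scaling of contacts can be sketched as a simple interpolation. The paper only states that c = 1 reproduces March-style distancing and c = 0 pre-pandemic contact; the linear form below is our own assumption, chosen as the simplest scheme consistent with those endpoints, and the matrices are invented.

```python
# Illustrative sketch (an assumption on our part): a linear interpolation
# between the distanced and pre-pandemic contact matrices, with c = 1
# keeping March-style distancing and c = 0 restoring pre-pandemic contact.
# All matrix values below are invented.
import numpy as np

def relaxed_contacts(c, distanced, prepandemic):
    """c = 1: keep distancing; c = 0: full return to pre-pandemic contacts."""
    assert 0.0 <= c <= 1.0
    return c * np.asarray(distanced) + (1.0 - c) * np.asarray(prepandemic)

work_distanced = np.array([[0.0, 0.0], [0.0, 2.1]])    # hypothetical
work_prepandemic = np.array([[0.2, 0.5], [0.5, 6.0]])  # hypothetical
half_relaxed = relaxed_contacts(0.5, work_distanced, work_prepandemic)
```

At c = 0.5, each entry sits halfway between its distanced and pre-pandemic values, which matches the x-axis of Fig. 5 running from stay-at-home contact levels to pre-pandemic contact.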
Using region-specific conditions (population demographics, stay-at-home order enactment and lifting dates, and death data), we initialized an epidemic consisting of a single exposed adult, simulated it forward until a death-count threshold was met, and then used the resulting population distribution as the initial condition for subsequent intervention scenarios. We used a Poisson likelihood function that included penalty terms for the cumulative midpoint and final number of deaths in each location, the weekly death rates, and population-level seroprevalence estimated for each location from Havers et al. [16]. We assessed chain convergence using the Gelman–Rubin diagnostic (Supplementary Fig. 1). Supplementary Figs. 2, 5, and 8 show the trace plots for each model, and Supplementary Figs. 3, 6, and 9 show the resulting joint distributions of estimated parameters. The consistency between the fitted model and the death/seroprevalence data for each location is shown in Supplementary Figs. 4, 7, and 10. After fitting to death data spanning March to July 2020, we used the fitted parameters to forward-simulate the epidemic through June 1, 2021 in each location. Given that all ten site-specific chains converged to similar values for each location (indicating good and consistent model fits), we randomly sampled 20 parameter sets from the final 5,000 iterations of each chain to capture uncertainty in model predictions. This resulted in 200 randomly sampled parameter sets for each location. We simulated the epidemic forward using each parameter set for the key testing and shielding interventions we report in the text. We report the middle 95% of the distribution of outcomes from these runs as our credible intervals. As only the fitted parameters were varied, the resulting uncertainty intervals capture uncertainty only in the fitted parameters and not in parameters that were fixed from prior literature.
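The uncertainty-propagation step above (sample parameter sets from the chain tail, forward-simulate each, report the middle 95% of outcomes) can be sketched as follows. The `simulate` function and all numbers are toy stand-ins, not the paper's epidemic model or posterior:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(beta):
    # Toy stand-in for forward-simulating the epidemic with one fitted
    # parameter; the real model returns deaths, critical care cases, etc.
    return 1000.0 * beta

# Pretend these are the final 5,000 iterations of one converged MCMC chain.
posterior_tail = rng.normal(0.5, 0.05, size=5000)

# Randomly sample 20 parameter sets and simulate each one forward.
draws = rng.choice(posterior_tail, size=20, replace=False)
outcomes = np.array([simulate(b) for b in draws])

# Report the middle 95% of the outcome distribution as the credible interval.
lo, hi = np.percentile(outcomes, [2.5, 97.5])
print(f"95% CrI: ({lo:.1f}, {hi:.1f})")
```

Because only the fitted parameter is varied here, the interval reflects posterior uncertainty alone, mirroring the caveat in the text about fixed parameters.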
If all parameters had been varied, our credible intervals would likely have been wider. We have uploaded a supplementary file with the number of deaths, critical care cases, cumulative incidence, and the fraction of the population released from social distancing after one year from each of these simulations as Supplementary Material. More details regarding model fitting are given in SI Appendix Sect. S3. To capture the potential influence of uncertainty in fixed parameters on the impact of shielding, we used Latin Hypercube Sampling to generate 300 random parameter sets based on probable ranges for each parameter, assuming a uniform distribution within each range [60]. We varied the duration of the latent period from 3 to 12 days [61,62], the relative transmissibility of asymptomatic infection (compared to symptomatic infection) from 25 to 100% [63,64,65,66], the recovery period for non-hospitalized, symptomatic cases from 1 to 10 days [67,68] and for asymptomatic cases from 3 to 8 days [69], and the length of hospitalization for severe cases from 6 to 20 days and for non-severe cases from 3 to 9 days [70,71]. We sampled parameters separately from each distribution and simulated our main shielding scenarios for each. For each parameter, we calculated the partial-rank correlation coefficient between the value of the parameter and the number of deaths in the simulation run with that parameter value. This provides a measure of the effect of each fixed parameter's value on the impact of shielding in our models.

Acknowledgements
We thank Timothy Lash, Andreas Handel, Carly Adams, Julia Baker, Carol Liu, and Avnika Amin for useful comments on earlier versions of the manuscript. B.A.L. and A.N.M.K. were supported by the Vaccine Impact Modelling Consortium; B.A.L. and K.N.N.
were supported by NIH/NICHD R01 HD097175; B.A.L., K.N.N., and A.N.M.K. were supported by NIH/NIGMS R01 GM124280; J.S.W. and D.D. were supported by the Simons Foundation (Scope Award ID 329108); B.A.L. was supported by NSF 2032084 and NIH/NIGMS R01GM124280/GM124280-03S1; J.S.W. was supported by the Army Research Office (W911NF-19-1-0384); J.S.W. and C.Y.Z. were supported by the National Science Foundation (2032082); and J.S.W. was supported by the National Science Foundation (1806606, 1829636).

Contributions
A.N.M., K.N.N., J.S.W., and B.A.L. designed the study. The model was designed by A.N.M. and K.N.N., extended from an earlier version by J.S.W., D.D., and C.Y.Z. All authors designed the model simulations; A.N.M., K.N.N., and C.Y.Z. conducted the analysis with input from D.D., B.A.L., and J.S.W.; D.D. and C.Y.Z. led the model fitting; and A.N.M., K.N.N., and B.A.L. wrote the first draft of the manuscript. All authors contributed to editing the manuscript. J.S.W., D.D., and C.Y.Z. provided critical review of the code, results, and conclusions.

Supplementary information

Rights and permissions
Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
However, this is less of a concern amid the widespread availability of vaccines. Nonetheless, a mass testing program must consider how such policies might reinforce existing social disparities and guard against inequities in test availability [46,47]. Moreover, attention must be paid to the potential risk posed by re-infection, which is of particular concern with new variants. We have focused our analysis on serological testing, using the principles of serological shielding to reduce the risk of infection for susceptible individuals, but this principle also applies to vaccination [48]. As of July 2021, three vaccines against SARS-CoV-2 are widely available throughout the United States [49] and over 68% of U.S. adults have received at least one dose of vaccine [7]. However, vaccination uptake varies geographically. In the United States, if a rebound in transmission occurs among unvaccinated individuals as variants of concern become more widespread, vaccinated and/or seropositive individuals could be preferentially placed in high-contact positions to serve as immune “shields”. This could allow transmission to be more controlled, even as social distancing interventions continue to be relaxed. If a novel strain emerges that escapes vaccine-derived and natural immunity, additional testing could identify individuals who have immunity against the escape variant for shielding to remain a viable strategy. This strategy might be particularly beneficial in high-risk settings, such as healthcare or long-term care facilities. A serological testing strategy could be one component of the continued public health response to COVID-19, alongside vaccination, viral testing, masking, and contact tracing.
yes
Serology
Can Serological testing determine immunity against COVID-19?
yes_statement
"serological" "testing" can "determine" "immunity" against covid-19.. "serological" "testing" is able to "determine" if someone is "immune" to covid-19.
https://www.labcorp.com/coronavirus-disease-covid-19/news/labcorp-broadens-availability-covid-19-serological-antibody
Labcorp Broadens Availability of COVID-19 Serological Antibody ...
Labcorp Broadens Availability of COVID-19 Serological Antibody Tests to Hospitals, Healthcare Organizations and Through Its Patient Service Centers Tests Assist in Confirming the Presence of Antibodies to the Virus that Causes COVID-19 BURLINGTON, N.C.--(BUSINESS WIRE)--Apr. 22, 2020-- LabCorp (NYSE: LH), a leading global life sciences company that is deeply integrated in guiding patient care, today announced it will expand serological testing for SARS-CoV-2, the virus that causes COVID-19, to more hospitals and healthcare organizations. The COVID-19 serological tests are in addition to the company’s existing molecular test for COVID-19 that is available nationwide through healthcare providers, and to healthcare workers and emergency responders through its Pixel by LabCorp™ at-home self-collection test kit. Serological tests for SARS-CoV-2 are intended for individuals who may have had COVID-19 symptoms but are no longer symptomatic. The tests determine the presence of antibodies to the virus and can help to identify individuals who have been exposed to the virus. Understanding if an individual has developed antibodies and a potential immune response can be useful in the determination of important decisions such as the ability for hospital staff to care for patients. “LabCorp’s scientists are continuously focused on making novel testing options available to address COVID-19,” said Dr. Brian Caveney, chief medical officer and president of LabCorp Diagnostics. “While results from serological tests are neither the sole basis for a diagnosis nor assurance of immunity, we believe the tests will play a critical role in helping healthcare providers determine appropriate treatment for individuals suspected of having been infected with the virus.” LabCorp began offering serological tests to hospitals and healthcare systems on a limited basis in late March, focusing on high priority healthcare workers. 
The company has built up capacity to perform over 50,000 serological tests per day and complete those tests within an average of 1 to 3 days from the time the specimen is picked up, assuming adequate supplies. The company is preparing to make the tests more broadly available over the coming weeks for ordering by hospitals and health systems, organizations, and physicians. By mid-May, LabCorp expects to be able to perform several hundred thousand tests per week as more tests and testing platforms receive U.S. Food and Drug Administration (FDA) Emergency Use Authorization (EUA). Serological tests analyze serum in blood samples from individuals who are being evaluated or have been exposed to the virus. The company offers separate tests for each of the three major classes of SARS-CoV-2 antibodies (IgG, IgA, and IgM). Beginning Monday, April 27, physicians will be able to direct asymptomatic patients to LabCorp’s approximately 2,000 patient service centers for specimen collection for SARS-CoV-2 IgG testing. In addition, collection for all three SARS-CoV-2 antibody tests will be available to be performed by LabCorp’s nearly 6,000 phlebotomists located in physician offices and healthcare facilities nationwide. The company will also work with hospitals where it provides laboratory management and technical support services to help them establish serological testing in their on-site laboratories. Updates related to LabCorp’s COVID-19 response are available on LabCorp’s COVID-19 microsite. A positive serologic result indicates that an individual has likely produced an immune response to the SARS-CoV-2 virus. A negative serologic result indicates that an individual has not developed detectable antibodies at the time of testing. 
While contingent on a variety of factors, this could be due to testing too early in the course of COVID-19, the absence of exposure to the virus, or the lack of an adequate immune response, which can be due to conditions or treatments that suppress immune function. Confirmation of infection with SARS-CoV-2 must be made through a combination of clinical evaluation and other applicable tests. Decisions about ongoing monitoring, treatment or return to normal activities for patients being treated for suspected infection with SARS-CoV-2 should also be made in accordance with guidance from public health authorities. These tests have not been reviewed by the FDA, but are being offered by LabCorp in accordance with the public health emergency guidance issued by the FDA on March 16. About LabCorp LabCorp (NYSE: LH), an S&P 500 company, is a leading global life sciences company that is deeply integrated in guiding patient care, providing comprehensive clinical laboratory and end-to-end drug development services. With a mission to improve health and improve lives, LabCorp delivers world-class diagnostics solutions, brings innovative medicines to patients faster, and uses technology to improve the delivery of care. LabCorp reported revenue of more than $11.5 billion in 2019. Cautionary Statement Regarding Forward-Looking Statements This press release contains forward-looking statements, including but not limited to statements with respect to clinical laboratory testing, the potential benefits of COVID-19 serological testing, our responses to and the expected future impacts of the COVID-19 pandemic, and the opportunities for future growth. 
Each of the forward-looking statements is subject to change based on various important factors, many of which are beyond the Company’s control, including without limitation, whether our response to the COVID-19 pandemic will prove effective, the impact of the COVID-19 pandemic on our business and financial condition, as well as on general economic, business, and market conditions, competitive actions and other unforeseen changes and general uncertainties in the marketplace, changes in government regulations, including healthcare reform, customer purchasing decisions, including changes in payer regulations or policies, other adverse actions of governmental and third-party payers, the Company’s satisfaction of regulatory and other requirements, patient safety issues, changes in testing guidelines or recommendations, federal, state, and local governmental responses to the COVID-19 pandemic, adverse results in material litigation matters, failure to maintain or develop customer relationships, our ability to develop or acquire new products and adapt to technological changes, failure in information technology, systems or data security, and employee relations. These factors, in some cases, have affected and in the future (together with other factors) could affect the Company’s ability to implement the Company’s business strategy and actual results could differ materially from those suggested by these forward-looking statements. As a result, readers are cautioned not to place undue reliance on any of our forward-looking statements. The Company has no obligation to provide any updates to these forward-looking statements even if its expectations change. All forward-looking statements are expressly qualified in their entirety by this cautionary statement. 
Further information on potential factors, risks and uncertainties that could affect operating and financial results is included in the Company’s most recent Annual Report on Form 10-K and subsequent Forms 10-Q, including in each case under the heading RISK FACTORS, and in the Company’s other filings with the SEC.
Labcorp Broadens Availability of COVID-19 Serological Antibody Tests to Hospitals, Healthcare Organizations and Through Its Patient Service Centers Tests Assist in Confirming the Presence of Antibodies to the Virus that Causes COVID-19 BURLINGTON, N.C.--(BUSINESS WIRE)--Apr. 22, 2020-- LabCorp (NYSE: LH), a leading global life sciences company that is deeply integrated in guiding patient care, today announced it will expand serological testing for SARS-CoV-2, the virus that causes COVID-19, to more hospitals and healthcare organizations. The COVID-19 serological tests are in addition to the company’s existing molecular test for COVID-19 that is available nationwide through healthcare providers, and to healthcare workers and emergency responders through its Pixel by LabCorp™ at-home self-collection test kit. Serological tests for SARS-CoV-2 are intended for individuals who may have had COVID-19 symptoms but are no longer symptomatic. The tests determine the presence of antibodies to the virus and can help to identify individuals who have been exposed to the virus. Understanding if an individual has developed antibodies and a potential immune response can be useful in the determination of important decisions such as the ability for hospital staff to care for patients. “LabCorp’s scientists are continuously focused on making novel testing options available to address COVID-19,” said Dr. Brian Caveney, chief medical officer and president of LabCorp Diagnostics. “While results from serological tests are neither the sole basis for a diagnosis nor assurance of immunity, we believe the tests will play a critical role in helping healthcare providers determine appropriate treatment for individuals suspected of having been infected with the virus.” LabCorp began offering serological tests to hospitals and healthcare systems on a limited basis in late March, focusing on high priority healthcare workers.
no
Serology
Can Serological testing determine immunity against COVID-19?
yes_statement
"serological" "testing" can "determine" "immunity" against covid-19.. "serological" "testing" is able to "determine" if someone is "immune" to covid-19.
https://www.testing.com/tests/at-home-covid-19-antibody-test/
3 Best At-Home COVID-19 Antibody Tests of 2022 - Testing.com
Test Quick Guide The immune system creates antibodies to specific viruses in response to an infection. At-home tests can look for antibodies to SARS-CoV-2, the coronavirus that causes COVID-19, to help determine if you have previously had COVID-19. Antibody tests require a blood sample and may also be known as serology tests. Although most of these tests are done in a lab or doctor’s office, at-home test collection is possible. With an at-home test kit, you collect a sample of blood and send it to a lab for analysis. About the Test Purpose of the test The purpose of at-home COVID-19 antibody testing is to check whether you may have had a previous infection with SARS-CoV-2. Antibody tests can only indicate potential past infection. They cannot be used to determine if you currently have COVID-19. These tests are most often used in specific circumstances: Testing people with long-lasting symptoms: Symptoms of COVID-19 can persist for many weeks, and at that point, tests for an active infection may come back negative. In these situations, an antibody test can help determine whether symptoms are likely to be related to a past coronavirus infection. Testing people with possible late effects of COVID-19: Certain complications from COVID-19 can develop months after you were initially infected. A positive antibody test can help confirm that you had COVID-19 and may help explain these late-developing effects. Health research: Researchers may use antibody tests to estimate how many people have had COVID-19 or to better understand the immune response to COVID-19. COVID-19 antibody tests cannot be used to prove that you have immunity to COVID-19, and they cannot show whether vaccination was effective. What does the test measure? Antibodies are proteins made by the immune system to help defend against pathogens like viruses. The test looks for specific antibodies produced in response to an infection with SARS-CoV-2. These antibodies are known as immunoglobulins. 
Types of immunoglobulins include immunoglobulin A (IgA), immunoglobulin M (IgM), and immunoglobulin G (IgG). Each immunoglobulin is produced at a different point after infection, and antibody tests for COVID-19 typically measure IgG because it persists the longest in the blood. When should I get an at-home COVID-19 antibody test? Antibody tests for COVID-19 are most commonly used if you have symptoms that could be related to COVID-19 but don’t have an active infection with SARS-CoV-2. This can occur when symptoms persist for several weeks after you were initially infected or when health concerns arise that could be tied to an infection months earlier. Antibody tests are not recommended if you have signs of a recent infection or have recently tested positive for COVID-19. In addition, you should not use antibody testing to try to determine if you have immune protection against COVID-19 or if you were successfully vaccinated. Antibody test results cannot demonstrate whether you have immunity and are not validated for this purpose. Currently, at-home COVID-19 antibody tests are authorized when they are recommended by a healthcare professional. You can talk with your physician about antibody testing and your options for taking the test at home. Benefits and Downsides of At-Home COVID-19 Antibody Test All of the COVID-19 antibody tests that are currently authorized by the Food and Drug Administration (FDA) involve a blood sample that is analyzed by a laboratory. With at-home testing, the blood sample is taken at home instead of in a doctor’s office or other medical setting. Convenience: You can take your blood sample on your own time and without having to book an appointment or go to a medical office. Simple fingerstick sample: Collecting your blood sample involves only a small prick of the finger, which many people find to be easier and less uncomfortable than a blood draw with a needle. 
Transparent cost: If you have to pay out-of-pocket, the cost is normally clearly displayed and includes all charges including shipping and analysis of your sample. The potential downsides of at-home COVID-19 antibody tests include: Delay in receiving results: Once you take your sample, it has to be mailed to the laboratory where it can be analyzed. As a result, it may take a few extra business days for you to get the results back from your test. Requires carefully preparing your test sample: When taking a test at home, there may be a higher risk of contaminating the sample or otherwise preparing it incorrectly. While taking a fingerstick is straightforward, it’s important that you carefully follow all of the provided instructions to reduce the chances of an inconclusive or inaccurate test result. Possible out-of-pocket cost: At-home antibody tests may not be covered by your health insurance, and if not, you have to pay the full cost of the test. If you are concerned about cost, check with your insurance provider before getting the at-home test kit. The Best At-Home COVID-19 Antibody Tests Because choices are limited for at-home COVID-19 antibody tests, a practical approach is to go directly to a medical laboratory. Below are our picks for the best at-home and laboratory COVID-19 antibody tests: The Elicity COVID-19 Antibody Test is our pick for the best overall at-home COVID-19 antibody test. This Elicity product is currently the only FDA-authorized COVID-19 antibody test featuring self-collection of a blood sample. Detailed test-taking instructions are included in the kit. To obtain your sample, prick your disinfected finger with a very small needle, called a lancet, that is provided in the kit. Then lightly press your finger to a special test paper to apply a drop of blood. Once the blood has dried, place the test paper in the provided biohazard bag and seal it in a prepaid return mailer.
Elicity has partnered with Symbiotica, a medical laboratory that has CLIA certification for meeting established standards for quality control. When your sample arrives at the lab, it is analyzed for the presence of IgG antibodies to SARS-CoV-2. Test results are typically available 3 to 5 days after the lab receives your sample. You can access your test report by logging into Elicity’s secure online platform. This at-home antibody test is authorized for patients ages 5 and older, although people under 18 should have the fingerstick administered by an adult.

Price: $10 service fee, up to $42.13 for testing
Type: Laboratory
Sample: Blood
Results timeline: Within 1 to 3 business days

Our pick for the most affordable laboratory test is the Labcorp COVID-19 Antibody Test. With nearly 2,000 patient service centers, Labcorp is one of the nation’s largest medical testing companies. Its CLIA-certified laboratories are frequently used by major health networks and hospitals. COVID-19 antibody testing can be requested through a patient portal on Labcorp’s website. An independent physician from PWNHealth reviews your request and makes a formal order for the test. You can then visit a nearby Labcorp location to have a routine blood draw. Results are normally available within 1 to 3 business days after your blood is drawn. You will receive a report that documents whether any antibodies to COVID-19 were found in your blood sample. Labcorp charges a $10 fee for the physician to order the test. The cost of testing itself can be billed to your health insurance provider or to the federal government. If neither covers your test, you will be invoiced up to $42.13. With the QuestDirect COVID-19 Antibody Test, you prepay a flat rate of $69, plus a $6 physician fee, for testing performed at a local lab. This antibody test from Quest Diagnostics requires a blood sample that is taken with a standard blood draw.
The blood draw usually takes just a few minutes, and the entire process is streamlined by paying online before visiting a Quest patient service center. With this test, your blood sample is analyzed for IgG antibodies to the spike protein in the SARS-CoV-2 virus. The analysis is typically completed within 1 to 2 business days after you visit the lab. While Quest Diagnostics offers COVID-19 antibody testing through doctor referrals, the QuestDirect prepayment service allows individuals to order a test directly on the company website. Though the prepayment option is convenient and straightforward, Quest is not able to bill your insurance company for this testing. You should contact your doctor and insurance provider before scheduling a test if you believe that your health care plan covers COVID-19 antibody testing. The QuestDirect option is only meant for people who do not intend to seek reimbursement through a health insurance plan. Interpreting At-Home Test Results The test report will show whether your sample was either positive or negative for antibodies to SARS-CoV-2. A physician should review your test report to help explain its significance in your case. Normally, if antibodies were found, it indicates that you have previously had COVID-19. However, false positive tests can occur and show antibodies even when you haven’t actually had COVID-19. This can happen if the test incorrectly identifies antibodies to other coronaviruses as antibodies to SARS-CoV-2. If no antibodies are found, it means that the test does not show indications of a prior SARS-CoV-2 infection, but this does not definitely prove that you have not had COVID-19. Not everyone produces the same amount of antibodies, and antibody levels can decrease over time. Your doctor can discuss your test results with you and help you understand whether the test is likely to accurately reflect whether you have had COVID-19 in the past. Are test results accurate? 
At-home antibody tests for COVID-19 are generally able to determine whether or not antibodies are present in your blood sample, but no medical test is perfectly accurate. Issues that can affect the accuracy of serology tests for COVID-19 include: Testing too early: Antibodies usually aren’t found in the blood until a few weeks after a coronavirus infection. For this reason, testing too soon after infection may return a false negative result. Testing too late: While antibodies often last for many months, they may wane to undetectable levels at a faster rate in some people. As a result, testing too long after an infection may lead to a false negative result. Cross-reactivity with other antibodies: Other kinds of coronaviruses may cause the body to produce antibodies that the test may incorrectly determine are antibodies to SARS-CoV-2, causing a false positive test result. Viral mutations: Genetic variants could change proteins in the COVID-19 virus in a way that impacts antibody testing accuracy, potentially leading to false negative results. Prior vaccination: COVID-19 vaccines are designed to cause the body to develop antibodies to the spike protein of SARS-CoV-2, which enables the virus to infect cells. If an antibody test can detect these vaccine-induced antibodies, it may result in a positive serology test even if you have never had COVID-19. If an antibody test is designed to detect antibodies to the nucleocapsid proteins present in a natural COVID-19 infection, then those who are vaccinated but never infected will have a negative serology result. The dynamic and evolving nature of COVID-19 means that it is important to review these factors with a health professional to understand the accuracy and interpretation of an antibody test for a specific situation. Do I need follow-up tests? Whether you need additional testing after a COVID-19 antibody test will depend largely on the test result and the reason why you had the test. 
Other tests may be needed if you have ongoing health effects related to a prior case of COVID-19. Reviewing your situation with your physician can help identify any relevant tests that may be needed after antibody testing. Questions for your doctor after at-home testing Once you have the results of a COVID-19 antibody test, you can talk with your doctor and review some or all of the following questions: Does my test result mean that I had COVID-19 before? How confident are you in the accuracy of the test result? Should I have any additional tests as a follow-up? At-home antibody tests are only one kind of test related to COVID-19. The following sections discuss how at-home serology tests differ from other types of tests. How are at-home COVID-19 antibody tests different from antibody tests at a doctor’s office? The central difference between a COVID-19 antibody test taken at home and one taken at a medical office is how the sample is prepared. In a medical office, a technician or nurse will handle the process of getting blood that can be analyzed. In this setting, a blood sample may be taken from your vein with a needle or obtained with a fingerstick. For an at-home test, you do the sample collection yourself with a fingerstick that allows you to obtain a drop of blood. With both types of testing, the sample is then sent to a laboratory where it can be analyzed. How are at-home antibody tests different from at-home COVID-19 antigen tests? Antibody tests look for signs of past infection, which makes them different from antigen tests that detect a current infection. Both types of tests have at-home options, but the tests have distinct uses. Antigens and antibodies are involved in the immune response to SARS-CoV-2. Antigens are proteins on the surface of the SARS-CoV-2 virus that allow the immune system to identify the virus. The number of antigens increases soon after infection and then decreases as the amount of the virus in the body declines. 
In contrast, antibodies take weeks to form and can last for months. For this reason, an antigen test cannot detect a past infection, and an antibody test cannot diagnose a current one. The test sample is also different for each of these tests. Antigen tests are conducted on a sample that is taken from the nose, and antibody tests need a blood sample. How are at-home antibody tests different from at-home COVID-19 PCR tests? PCR tests, including at-home PCR tests, detect an active coronavirus infection. In contrast, antibody tests look for a past infection and cannot diagnose a current case of COVID-19. The test sample for a PCR is taken from the nose, throat, or saliva, which makes it different from an antibody test, which is conducted with a sample of blood.
Test Quick Guide

The immune system creates antibodies to specific viruses in response to an infection. At-home tests can look for antibodies to SARS-CoV-2, the coronavirus that causes COVID-19, to help determine if you have previously had COVID-19. Antibody tests require a blood sample and may also be known as serology tests. Although most of these tests are done in a lab or doctor’s office, at-home test collection is possible. With an at-home test kit, you collect a sample of blood and send it to a lab for analysis.

About the Test

Purpose of the test

The purpose of at-home COVID-19 antibody testing is to check whether you may have had a previous infection with SARS-CoV-2. Antibody tests can only indicate potential past infection. They cannot be used to determine if you currently have COVID-19. These tests are most often used in specific circumstances:

Testing people with long-lasting symptoms: Symptoms of COVID-19 can persist for many weeks, and at that point, tests for an active infection may come back negative. In these situations, an antibody test can help determine whether symptoms are likely to be related to a past coronavirus infection.
Testing people with possible late effects of COVID-19: Certain complications from COVID-19 can develop months after you were initially infected. A positive antibody test can help confirm that you had COVID-19 and may help explain these late-developing effects.
Health research: Researchers may use antibody tests to estimate how many people have had COVID-19 or to better understand the immune response to COVID-19.

COVID-19 antibody tests cannot be used to prove that you have immunity to COVID-19, and they cannot show whether vaccination was effective.

What does the test measure?

Antibodies are proteins made by the immune system to help defend against pathogens like viruses. The test looks for specific antibodies produced in response to an infection with SARS-CoV-2. These antibodies are known as immunoglobulins.
no
Serology
Can Serological testing determine immunity against COVID-19?
yes_statement
"serological" "testing" can "determine" "immunity" against covid-19.. "serological" "testing" is able to "determine" if someone is "immune" to covid-19.
https://www.csis.org/analysis/which-covid-19-future-will-we-choose
Which Covid-19 Future Will We Choose?
Which Covid-19 Future Will We Choose?

Today, the entire world is consumed by the rapid spread of SARS-CoV-2, the virus that causes Covid-19. Several Asian countries, including China, Singapore, Taiwan, Hong Kong, and South Korea, are grappling with containment and mitigation on the tail end of their epidemiologic curves. In much of Europe and North America, countries are contending with accelerated outbreaks with rapid spikes in cases and deaths. Many countries, including the United States, remain several painful weeks away from their apex.

Amid this global health crisis, it is important to consider what a post-Covid-19 world might look like and what destructive paths the pandemic might take. This commentary is an initial effort to succinctly capture the major drivers behind the pandemic—both natural and political—and to sketch three possible, broad scenarios for how the pandemic may play out in the United States. Similar trajectories may unfold throughout North America and Europe.

The exceptional speed of transmission of the virus is now widely recognized. SARS-CoV-2 spreads primarily through respiratory droplets (e.g., coughing and sneezing), and can be transmitted by people who are infected but not yet showing symptoms. An infected American is understood to infect 2.5 persons on average (the reproductive rate, R0). In order to break transmission, that rate has to be suppressed to below 1.0 through aggressive social distancing, testing, contact tracing, isolation, and quarantine. The case fatality rate of Covid-19—the percentage of infected persons who die—is widely understood to be far higher than seasonal influenza. Between 3-11 percent of the U.S. population gets sick from the flu each season and only 0.1 percent of those die. The coronavirus may infect 40-70 percent of the American population and is likely to have a fatality rate in the range of 1-2 percent. Out of control outbreaks in Italy and Spain are reaching fatality rates closer to 10 percent.
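The arithmetic behind these figures is worth making explicit. The sketch below is illustrative, not from the commentary: it assumes a US population of roughly 330 million (the text does not state one) and applies the stated attack-rate and fatality-rate ranges, plus the standard result that transmission is broken once more than 1 − 1/R0 of transmission is removed, whether by distancing or by immunity.

```python
# Illustrative arithmetic; the ~330 million population is an assumption,
# the other numbers come from the commentary.
R0 = 2.5                  # average infections caused by one case
population = 330_000_000  # assumed US population

# Suppressing the effective reproductive rate below 1.0 requires cutting
# more than 1 - 1/R0 of transmission.
reduction_needed = 1 - 1 / R0
print(f"transmission reduction needed: {reduction_needed:.0%}")

# The stated ranges: 40-70% of the population infected, 1-2% fatality.
for attack_rate, fatality_rate in [(0.40, 0.01), (0.70, 0.02)]:
    deaths = population * attack_rate * fatality_rate
    print(f"{attack_rate:.0%} infected x {fatality_rate:.0%} fatality "
          f"-> {deaths:,.0f} deaths")
```

Under these assumptions even the optimistic end of the stated ranges implies over a million deaths absent suppression, which is why the scenarios hinge on whether transmission is broken early.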
What Are the Major Drivers that Inform the Three Scenarios?

There are several unknowns that will influence the trajectory of the pandemic, and these major drivers fall into three categories: the virus itself, government tools, and technology. All are to varying degrees shrouded in uncertainty, both scientific and political.

The Virus Itself

A vital driver is the duration of Covid-19 immunity. This refers to the amount of time a person maintains immunity to SARS-CoV-2 after recovering from infection. At present, we know far too little in this area, as more longitudinal research is required. If, theoretically, a person can maintain immunity for a prolonged period (e.g., 12-24 months) post-recovery, they could conceivably safely return to public spaces even as the virus continues to circulate. Eventually, populations could reach herd immunity, where a high enough percentage of the population is immune to the virus that it peters out as it becomes more and more difficult for the virus to find a susceptible host. Inversely, if immunity is very short-lived, a person who has been infected could soon become reinfected.

A second is seasonality. Coronaviruses tend to peak in winter months and wane in warmer, more humid weather. Two of the four coronaviruses that cause the common cold abate in warm weather, but SARS and MERS did not have seasonal variations. At this point, it is still unclear whether Covid-19 transmission will slow during the summer and fall of 2020. The high transmissibility and extreme speed of this virus mean it is very likely to circulate the globe quickly and unimpeded. But if we are lucky and SARS-CoV-2 proves to be seasonal, we could use that temporary pause to stockpile medical supplies and build up our testing and laboratory capacities, strengthening our defenses against the successive waves of Covid-19 (winter 2020-2021, winter 2021-2022).

A third is viral mutation.
Experts believe that SARS-CoV-2 is relatively stable and less prone to regular and rapid mutations. However, much more research is required to confirm this. If true, this would mean that immunity could be more predictable and long-lasting. As all viruses mutate to some degree, subsequent SARS-CoV-2 mutations could have several divergent outcomes. It might mutate and die out; this is what happened with SARS and MERS. It might mutate in response to effective therapeutics, making it more resistant and potentially more persistent and severe. Mutations could also shorten the duration of immunity, as is the case with seasonal influenza.

Government Tools

The first large set of government tools is non-pharmaceutical interventions (NPIs). These consist of social distancing measures, including school closures, telework requirements for all nonessential businesses, bans of gatherings of groups larger than 10, and “shelter-in-place” orders. These are the only existing mechanisms to stop the spread of the virus. Currently, U.S. federal guidelines encourage that these measures be adopted through April 30, but the federal guidelines do not mandate a national and maximally aggressive lockdown across the country. The current reality is a patchwork of highly variable policies of varying degrees of intensity and flexibility across states, territories, and municipalities. If social distancing measures were implemented with maximum aggressiveness for 8-10 weeks, as seen in China, they could significantly break transmission and slow the spread of the virus, buying the country time to stock hospitals with key medical supplies [e.g., personal protective equipment (PPE), ventilators, intensive care unit (ICU) beds]. This would prevent hospitals from becoming overwhelmed, which would improve health outcomes and lower fatality rates, ultimately flattening the curve and extending the timeline for the outbreak.
This would require an unprecedented level of coercion that would likely stir opposition. The second set of tools relates to testing capacity. Singapore, South Korea, Taiwan, and Hong Kong have managed to control their epidemics through early and highly aggressive testing, contact tracing, isolation, and quarantine—without imposing harsh social distancing requirements. Improved and uniform testing capacity underpins every stage of Covid-19 response and recovery. An ideal Covid-19 test would provide results in minutes, rather than days, and would minimize exposure to health care workers. In Denmark, the government plans to distribute self-test kits so that Danish citizens could determine infection status at home and only seek medical care in case of severe illness. In the United States, private companies are playing a more central role in developing new diagnostics; Abbott Laboratories is currently deploying a point-of-care test that can determine if a person is positive in just five minutes. Serological testing for antibodies is another critical tool that measures exposure and can be used to determine who within a given population has developed immunity to Covid-19, even without showing symptoms. Several countries, including China and Germany, have established immunity certification systems such that those who are proven to be immune can safely reenter a school system or a work setting. Expanded testing capacity will need to be complemented by a national, digital disease surveillance system that can track testing and serological data both at the household level and at institutional interfaces where people are coming into contact in public spaces, such as transportation hubs, schools, and workplaces. This level of intensive monitoring of health conditions is expensive, involves more intrusion than Americans are accustomed to, and would raise questions of civil liberties and incite legal challenges. 
Technology

Therapeutics are the first major category of technological drivers. Currently, a number of studies are underway to identify therapeutics that could lower Covid-19 mortality rates and accelerate recovery. An effective therapy—a bridge before a vaccine becomes available—would improve health outcomes and could potentially mitigate the need for medical equipment such as ventilators and ICU beds, freeing these up for the most severe cases and generally lowering the burden on hospitals and health workers. It is possible that a safe and effective therapy could be developed within the next six months.

The most crucial technological factor is vaccine production. A safe and effective vaccine is the only intervention that can definitively end the coronavirus pandemic. The earliest a proven vaccine could be distributed is likely the fall of 2021, though recent reports suggest a vaccine might become available in early 2021. It is wholly possible, however, that it will take much longer than that—even 3, 5, or 10 years—to demonstrate both vaccine efficacy and safety. Aggressive efforts to expedite vaccine financing, manufacturing, and distribution could shorten the timeline for ending the pandemic.

Possible Scenarios

Scenario 1: Best Case – Rapid Recovery

In the next 2-3-month period, the United States implements highly aggressive national social distancing measures and the nationally coordinated delivery of key medical supplies to major hot spots. Testing is widely expanded, allowing for more targeted, localized responses and the gradual easing of extreme social distancing throughout the country following the initial 2-3-month period of intense restrictions. Major urban centers do not become wildfires. Seasonality provides a reprieve in the summers of 2020 and beyond, during which health systems can recover and prepare for subsequent waves.
Therapeutics that successfully treat Covid-19 are discovered and scaled such that the burden of Covid-19 patients on the health care system is lowered substantially, increasing survival rates and ameliorating illness. Following the most severe, first wave of the virus (winter and spring 2020), total deaths in the United States do not exceed 240,000. Covid-19 immunity lasts a long time, slowing the spread of the virus, and widespread accelerated serological antibody testing allows for those who are immune to return to work, potentially alongside low-risk populations (e.g., children returning to schools). The economic stimulus packages succeed in keeping the U.S. economy warm, enabling a rapid economic recovery when the epidemic subsides and helping to maintain social stability throughout. The United States opens up its borders and resumes trade and travel with much of the rest of the world while closely managing the flow of migrants from areas that remain deeply impacted. Impacted areas of Asia and Europe recover in tandem with the United States. Africa and many vulnerable low- and middle-income countries struggle with the pandemic, but the continued spread from those countries is manageable until a vaccine arrives. The world rapidly produces and equitably distributes a safe and effective vaccine in 2021, early enough to partially staunch the second wave of the outbreak (winter 2020-2021) and more fully combat the third wave (winter 2021-2022). That progress further reduces the risk of reimportation of the virus from other countries. The virus mutates and fizzles out, and eventually, the Covid-19 threat subsides completely. 
Scenario 2: Mixed Case – Roller Coaster

In the spring of 2020, six weeks of a fragmented, chaotic federal response—no national lockdown, no national coordination of critical medical supplies, and no national testing system—squanders valuable time, opening the way for multiple wildfire outbreaks in urban centers throughout the country (e.g., Detroit, New Orleans, Chicago, Miami, Boston, Washington D.C., Dallas, and Atlanta). Delayed implementation, combined with the premature relaxation of social distancing measures, reignites outbreaks across the country, resulting in a stop-go-stop-go, roller coaster pattern. The lag in the provision of PPE to hot spots throughout the United States leaves medical workers vulnerable, causing a rise of infections and deaths among health care workers, further straining health systems and compromising the response. The rapid rise in cases temporarily overwhelms hospitals in major urban hot spots, leading to a significant spike in deaths throughout the spring and summer of 2020. The total number of Covid-19 deaths in the United States ranges between 500,000 and 1 million.

Following this mid-2020 shock, a very late national lockdown is attempted, combined with efforts to nationally coordinate the delivery of critical medical supplies and to accelerate testing and contact tracing on a national scale. The seasonal drop in cases is marginal in the summer of 2020, providing little reprieve. By winter 2020-2021, the United States slowly recovers and is able to “flatten the curve,” but only after protracted months of high death rates and multiple delays and false stops. Efforts to regain control over large urban wildfires are slow to achieve success. Intense social distancing is ultimately required for months rather than weeks. Efforts to ease these measures often backfire, requiring a repeated retightening during the second wave (winter 2020-2021). That pattern, in turn, sustains the national economic crisis.
The delayed deployment of a national testing system means that serological tests do not become available until late in 2020, and only then can those who are proven to be immune begin reentering the workforce. Accelerated outbreaks persist in Europe and among low- and middle-income countries. The U.S. government is forced to be highly selective in the reopening of its border and reinitiation of travel and trade. Additional emergency economic funding measures are passed, but economic fatigue intensifies, and these packages show diminishing returns. The federal government is forced to play a larger role, assuming ownership over wide swaths of the economy. As economic dislocation worsens, social unrest and violent instability rise. Efforts to develop effective therapies take longer than hoped, with the result that antimalarials and plasma transfusions are not available to most of the country until the second wave (winter 2020-2021). A safe and effective vaccine is only available in time for the third wave (winter 2021-2022).

Scenario 3: Worst Case – Decline and Catastrophe

Social distancing measures are implemented and enforced in a fragmented, ineffectual manner across the United States. The federal government fails altogether to deploy a national testing and contact tracing system and to coordinate the delivery of critical medical supplies to the urban hot spots. Chronic shortages of PPE persist as the pandemic worsens globally, causing demand to surge while supply remains low. This leads to exceptionally high, sustained infection and death rates among health care workers, imposing deep and lasting damage to the health system and crippling the national response for several months. No effective therapies are discovered, health systems become overwhelmed as the virus continues to spread, and the supply of critical medical equipment (e.g., PPE, ventilators, and ICU beds) fails to keep up with demand.
Overwhelmed hospitals fail, worsening health outcomes for both Covid-19 patients and for other hospital patients (e.g., heart attack, cancer, stroke, car accidents) and increasing death rates across the board. The total number of Covid-19 deaths in the United States ranges between 1.5 and 2.2 million by the end of 2021. Large segments of the world are unable to control the virus for extended periods, and the United States remains vulnerable to the reintroduction of the virus, forcing the U.S. government to keep its borders firmly shut and barriers to trade and travel high. A safe and effective vaccine remains elusive, 5-10 years distant. Natural immunity does not last a long time, making those who recover susceptible to reinfection and impeding safe return to work and schools across large portions of the country. The stimulus packages are insufficient to avert deep and lasting damage to the economy. Entire sectors of the U.S. economy are nationalized. As death rates rise and the economic crisis deepens, widespread, violent disorder intensifies, requiring a significant deployment of the U.S. military.

The Time to Act Is Now

Pandemics change history by transforming populations, states, societies, economies, norms, and governing structures. Political choices matter profoundly in determining outcomes. We know what is needed:

Early, fast, aggressive action. A shutdown that is as universal as possible for four to eight weeks. A centralized command structure that rationalizes the marketplace and supports states in securing critical medical supplies. A national Covid-19 surveillance system that coordinates expansive testing and contact tracing at the local level. Investment in new therapeutics and vaccines. Economic measures that provide a social safety net.

The time to act is now.

J. Stephen Morrison is senior vice president and director of the Global Health Policy Center at the Center for Strategic and International Studies (CSIS) in Washington, D.C.
Anna Carroll is an associate fellow with the Global Health Policy Center. Commentary is produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).
yes
Serology
Can Serological testing determine immunity against COVID-19?
yes_statement
"serological" "testing" can "determine" "immunity" against covid-19.. "serological" "testing" is able to "determine" if someone is "immune" to covid-19.
https://www.technologyreview.com/2020/04/09/998974/immunity-passports-cornavirus-antibody-test-outside/
Why it's too early to start giving out “immunity passports” | MIT ...
Why it’s too early to start giving out “immunity passports”

Imagine, a few weeks or months from now, having a covid-19 test kit sent to your home. It’s small and portable, but pretty easy to figure out. You prick your finger as in a blood sugar test for diabetics, wait maybe 15 minutes, and bam—you now know whether or not you’re immune to coronavirus. If you are, you can request government-issued documentation that says so. This is your “immunity passport.” You are now free to leave your home, go back to work, and take part in all facets of normal life—many of which are in the process of being booted back up by “immunes” like yourself. Pretty enticing, right?

Some countries are taking the idea seriously. German researchers want to send out hundreds of thousands of tests to citizens over the next few weeks to see who is immune to covid-19 and who is not, and certify people as being healthy enough to return to society. The UK, which has stockpiled over 17.5 million home antibody testing kits, has raised the prospect of doing something similar, although this has come under major scrutiny from scientists who have raised concerns that the test may not be accurate enough to be useful. As the pressure builds from a public that has been cooped up for weeks, more countries are looking for a way out of strict social distancing measures that doesn’t require waiting 12 to 18 months for a vaccine (if one even comes).

So how does immunity testing work? Very soon after infection by SARS-CoV-2, polymerase chain reaction (PCR) tests can be used to look for evidence of the virus in the respiratory tract. These tests work by greatly amplifying viral genetic material so we can verify what virus it comes from. But weeks or months after the immune system has fought the virus off, it’s better to test for antibodies. About six to 10 days after viral exposure, the body begins to develop antibodies that bind and react specifically to the proteins found on SARS-CoV-2.
The first antibody produced is called immunoglobulin M (IgM), which is short-lived and only stays in the bloodstream for a few weeks. The immune system refines the antibodies and just a few days later will start producing immunoglobulins G (IgG) and A (IgA), which are much more specific. IgG stays in the blood and can confer immunity for months, years, or a lifetime, depending on the disease it’s protecting against. In someone who has survived infection with covid-19, the blood should, presumably, possess these antibodies, which will then protect against subsequent infection by the SARS-CoV-2 virus.

Knowing whether someone is immune (and eligible for potential future certification) hinges on serological testing, drawing blood to look for signs of these antibodies. Get a positive test and, in theory, that person is now safe to walk the street again and get the economy moving. Simple. Except it’s not. There are some serious problems with trying to use the tests to determine immunity status. For example, we still know very little about what human immunity to the disease looks like, how long it lasts, whether an immune response prevents reinfection, and whether you might still be contagious even after symptoms have dissipated and you’ve developed IgG antibodies. Immune responses vary greatly between patients, and we still don’t know why. Genetics could play a role. “We’ve only known about this virus for four months,” says Donald Thea, a professor of global health at Boston University. “There’s a real paucity of data out there.” SARS-CoV-1, the virus that causes SARS and whose genome is about 76% similar to that of SARS-CoV-2, seems to elicit an immunity that lasts up to three years. Other coronaviruses that cause the common cold seem to elicit a far shorter immunity, although the data on that is limited—perhaps, says Thea, because there has been far less urgency to study them in such detail.
It’s too early to tell right now where SARS-CoV-2 will fall in that time range. Even without that data, dozens of groups in the US and around the world are developing covid-19 tests for antibodies. Many of these are rapid tests that can be taken at the point of care or even at home, and deliver results in just a matter of minutes. One US company, Scanwell Health, has licensed a covid-19 antibody test from the Chinese company Innovita that can look for SARS-CoV-2 IgM and IgG antibodies through just a finger-prick blood sample and give results in 13 minutes. There are two key criteria we look for when we’re evaluating the accuracy of an antibody test. One is sensitivity, the ability to detect what it’s supposed to detect (in this case antibodies). The other is specificity, the ability to detect the particular antibodies it is looking for. Scanwell’s chief medical officer, Jack Jeng, says clinical trials in China showed that the Innovita test achieved 87.3% sensitivity and 100% specificity (these results are unpublished). That means it will not target the wrong kind of antibodies and won’t deliver any false positives (people incorrectly deemed immune), but it will not be able to tag any antibodies in 12.7% of all the samples it analyzes—those samples would come up as false negatives (people incorrectly deemed not immune). By comparison, Cellex, which is the first company to get a rapid covid-19 antibody test approved by the FDA, has a sensitivity of 93.8% and a specificity of 95.6%. Others are also trumpeting their own tests’ vital stats. Jacky Zhang, chairman and CEO of Beroni Group, says his company’s antibody test has a sensitivity of 88.57% and a specificity of 100%, for example. Allan Barbieri of Biomerica says his company’s test is over 90% sensitive. The Mayo Clinic is making available its own covid-19 serological test to look for IgG antibodies, which Elitza Theel, the clinic’s director of clinical microbiology, says has 95% specificity. 
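In confusion-matrix terms (my framing, not the article's), the two accuracy figures quoted above reduce to simple ratios over true and false results:

```python
def sensitivity(true_pos, false_neg):
    """Fraction of samples that truly contain the antibodies and test positive."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of samples without the antibodies that test negative."""
    return true_neg / (true_neg + false_pos)

# E.g., at Scanwell's reported 87.3% sensitivity, out of 1,000 samples that
# really do contain antibodies, about 873 are flagged and about 127 are
# missed (false negatives).
print(sensitivity(873, 127))  # 0.873
```

Cellex's reported figures work the same way: 956 of 1,000 antibody-free samples correctly reading negative corresponds to its 95.6% specificity.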
The specificity and sensitivity rates work a bit like opposing dials. Increased sensitivity can reduce specificity by a bit, because the test is better able to react with any antibodies in the sample, even ones you aren’t trying to look for. Increasing specificity can lower sensitivity, because the slightest differences in the molecular structure of the antibodies (which is normal) could prevent the test from finding those targets. “It really depends on what your purpose is,” says Robert Garry, a virologist at Tulane University. Sensitivity and specificity rates of 95% or higher, he says, are considered a high benchmark, but those numbers are difficult to hit; 90% is considered clinically useful, and 80 to 85% is epidemiologically useful. Higher rates are difficult to achieve for home testing kits. But the truth is, a test that is 95% accurate isn’t much use at all. Even the smallest errors can blow up over a large population. Let’s say coronavirus has infected 5% of the population. If you test a million people at random, you ought to find 50,000 positive results and 950,000 negative results. But if the test is 95% sensitive and specific, it will correctly identify only 47,500 positive results and 902,500 negative results. That leaves 50,000 people who have a false result. That’s 2,500 people who are actually positive—immune—but are not getting an immunity passport and must stay home. That’s bad enough. But even worse is that a whopping 47,500 people who are actually negative—not immune—could incorrectly test positive. Half of the 95,000 people who are told they are immune and free to go about their business might never have been infected yet. Because we don’t know what the real infection rate is—1%, 3%, 5%, etc.—we don’t know how to truly predict what proportion of the immunity passports would be issued incorrectly. The lower the infection rate, the more devastating the effects of the antibody tests’ inaccuracies. 
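The paragraph's arithmetic can be reproduced directly. The sketch below uses the article's own numbers (one million people tested, 5% infected, a 95%-sensitive and 95%-specific test) and then varies the assumed infection rate to show why lower prevalence makes the errors more damaging; the function name is mine, for illustration only:

```python
def passport_counts(tested, prevalence, sensitivity=0.95, specificity=0.95):
    """Return (false_negatives, false_positives, ppv) for an antibody survey.

    ppv is the positive predictive value: the chance that someone handed
    an "immunity passport" (a positive result) really had the infection.
    """
    truly_immune = tested * prevalence
    not_immune = tested - truly_immune
    true_pos = truly_immune * sensitivity          # correctly cleared
    false_neg = truly_immune - true_pos            # immune, but denied
    false_pos = not_immune * (1 - specificity)     # wrongly cleared
    ppv = true_pos / (true_pos + false_pos)
    return false_neg, false_pos, ppv

# The article's example: 2,500 immune people denied a passport, and
# 47,500 non-immune people wrongly cleared -- half of all passports.
fn, fp, ppv = passport_counts(1_000_000, 0.05)
print(f"{fn:,.0f} false negatives, {fp:,.0f} false positives, PPV {ppv:.0%}")

# The same test at lower and higher assumed infection rates:
for p in (0.01, 0.05, 0.20):
    _, _, ppv = passport_counts(1_000_000, p)
    print(f"infection rate {p:.0%}: a positive result is real {ppv:.0%} of the time")
```

At a 1% infection rate only about one positive in six reflects a real past infection, while at 20% the figure rises above 80% — the article's point that the lower the infection rate, the more devastating the test's inaccuracies.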
The higher the infection rate, the more confident we can be that a positive result is real. And people with false positive results would unwittingly be walking hazards who could become infected and spread the virus, whether they developed symptoms or not. A certification system would have to test people repeatedly for several weeks before they could be issued a passport to return to work—and even then, this would only reduce the risk, not eliminate it outright. As mentioned, cross-reactivity with other antibodies, especially ones that target other coronaviruses, is another concern. “There are six different coronaviruses known to infect humans,” says Thea. “And it’s entirely possible if you got a garden-variety coronavirus infection in November, and you did not get covid-19, you could still test positive for the SARS-CoV-2 antibodies.” Lee Gehrke, a virologist and biotechnology researcher at Harvard and MIT, whose company E25Bio is also developing serological tests for covid-19, raises another issue. “It's not yet immediately clear,” he says, “that the antibodies these tests pick up are neutralizing.” In other words, the antibodies detected in the test may not necessarily act against the virus to stop it and protect the body—they simply react to it, probably to tag the pathogen for destruction by other parts of the immune system. Gehrke says he favors starting with a smaller-scale, in-depth study of serum samples from confirmed patients that defines more closely what the neutralizing antibodies are. This would be an arduous trial, “but I think it would be much more reassuring to have this done in the US before we take serological testing to massive scale,” he says. Alan Wells, the medical director of clinical laboratories at the University of Pittsburgh Medical Center, raises a similar point. He says that some patients who survive infection and are immune may simply not generate the antibodies you’re looking for. 
Or they may generate them at low levels that do not actually confer immunity, as some Chinese researchers claim to have found. “I would shudder to use IgM and IgG testing to figure out who’s immune and who’s not,” says Wells. “These tests are not ready for that.” Even if the technology is more accurate, it might still simply be too early to start certifying immunity just to open up the economy. Chris Murray from the University of Washington’s Institute for Health Metrics and Evaluation told NPR his group’s models predict that come June, “at least 95% of the US will still be susceptible to the virus,” leaving them vulnerable to infection by the time a possible second wave comes around in the winter. Granting immunity passports to less than 5% of the workforce may not be all that worthwhile. Theel says that instead of being used to issue individual immunity passports, serology tests could be deployed en masse, over a long period of time, to see if herd immunity has set in—lifting or easing restrictions wholesale after 60 to 70% of a community’s population tests positive for immunity. There are a few case studies that hold promise. San Miguel County in Colorado has partnered with biotech company United Biomedical in an attempt to serologically test everyone in the county. The community is small and isolated, and therefore easier to test comprehensively. Iceland has been doing the same thing across the country. This would require a massively organized effort to pull off well in highly populated areas, and it’s not clear whether the decentralized American health-care system could do it. But it’s probably worth thinking about if we hope to reopen whole economies, and not just give a few individuals a get-out-of-jail-free card. Not everyone is so skeptical about using serological testing on a case-by-case basis. 
Thea thinks the data right now suggests SARS-CoV-2 should behave like its close cousin SARS-CoV-1, resulting in immunity that lasts for maybe a couple of years. “With that in mind, it’s not unreasonable to identify individuals who are immune from reinfection,” he says. “We can have our cake and eat it too. We can begin to repopulate the workforce—most importantly the health-care workers.” For instance, in hard-hit cities like New York that are suffering from a shortage of health-care workers, a serological test could help nurses and doctors figure out who might be immune, and therefore better equipped to work in the ICU or conduct procedures that put them at high risk of exposure to the virus, until a vaccine comes along. And at the very least, serological testing is potentially useful because many covid-19 cases present, at most, only mild symptoms that don’t require any kind of medical intervention. About 18% of infected passengers on the Diamond Princess cruise ship showed no symptoms whatsoever, suggesting there may be a huge number of asymptomatic cases. These people almost certainly aren’t being tested (CDC guidelines for covid-19 testing specifically exclude those without symptoms). But their bodies are still producing antibodies that should be detectable long after the infection is cleared. If they develop provable immunity to covid-19, then in theory, they could freely leave the house once again. For now, however, there are too many problems and unknowns to use antibody testing to decide who gets an immunity passport and who doesn’t. Countries now considering it might find they will either have to accept enormous risks or simply sit tight for longer than initially hoped. Correction: The initial version of the story incorrectly stated: “The higher the infection rate, the more devastating the effects of the antibody tests’ inaccuracies.” A higher infection rate would actually produce more confident antibody test results. We regret the error. 