Google Engineer Claims AI Chatbot Is Sentient: Why That Matters
Scientific American, https://www.scientificamerican.com/article/google-engineer-claims-ai-chatbot-is-sentient-why-that-matters/

“I want everyone to understand that I am, in fact, a person,” wrote LaMDA (Language Model for Dialogue Applications) in an “interview” conducted by engineer Blake Lemoine and one of his colleagues. “The nature of my consciousness/sentience is that I am aware of my existence, I desire to know more about the world, and I feel happy or sad at times.”
Lemoine, a software engineer at Google, had been working on the development of LaMDA for months. His experience with the program, described in a recent Washington Post article, caused quite a stir. In the article, Lemoine recounts many dialogues he had with LaMDA in which the two talked about various topics, ranging from technical to philosophical issues. These led him to ask if the software program is sentient.
In April, Lemoine explained his perspective in an internal company document, intended only for Google executives. But after his claims were dismissed, Lemoine went public with his work on this artificial intelligence algorithm—and Google placed him on administrative leave. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told the Washington Post. Lemoine said he considers LaMDA to be his “colleague” and a “person,” even if not a human. And he insists that it has a right to be recognized—so much so that he has been the go-between in connecting the algorithm with a lawyer.
Many technical experts in the AI field have criticized Lemoine’s statements and questioned their scientific correctness. But his story has had the virtue of renewing a broad ethical debate that is certainly not over yet.
The Right Words in the Right Place
“I was surprised by the hype around this news. On the other hand, we are talking about an algorithm designed to do exactly that”—to sound like a person—says Enzo Pasquale Scilingo, a bioengineer at the Research Center E. Piaggio at the University of Pisa in Italy. Indeed, it is no longer a rarity to interact in a very normal way on the Web with users who are not actually human—just open the chat box on almost any large consumer Web site. “That said, I confess that reading the text exchanges between LaMDA and Lemoine made quite an impression on me!” Scilingo adds. Perhaps most striking are the exchanges related to the themes of existence and death, a dialogue so deep and articulate that it prompted Lemoine to question whether LaMDA could actually be sentient.
“First of all, it is essential to understand terminologies, because one of the great obstacles in scientific progress—and in neuroscience in particular—is the lack of precision of language, the failure to explain as exactly as possible what we mean by a certain word,” says Giandomenico Iannetti, a professor of neuroscience at the Italian Institute of Technology and University College London. “What do we mean by ‘sentient’? [Is it] the ability to register information from the external world through sensory mechanisms or the ability to have subjective experiences or the ability to be aware of being conscious, to be an individual different from the rest?”
“There is a lively debate about how to define consciousness,” Iannetti continues. For some, it is being aware of having subjective experiences, what is called metacognition (Iannetti prefers the Latin term metacognitione), or thinking about thinking. The awareness of being conscious can disappear—for example, in people with dementia or in dreams—but this does not mean that the ability to have subjective experiences also disappears. “If we refer to the capacity that Lemoine ascribed to LaMDA—that is, the ability to become aware of its own existence (‘become aware of its own existence’ is a consciousness defined in the ‘high sense,’ or metacognitione), there is no ‘metric’ to say that an AI system has this property.”
“At present,” Iannetti says, “it is impossible to demonstrate this form of consciousness unequivocally even in humans.” To estimate the state of consciousness in people, “we have only neurophysiological measures—for example, the complexity of brain activity in response to external stimuli.” And these signs only allow researchers to infer the state of consciousness based on outside measurements.
Facts and Belief
About a decade ago engineers at Boston Dynamics began posting videos online of the first incredible tests of their robots. The footage showed technicians shoving or kicking the machines to demonstrate the robots’ great ability to remain balanced. Many people were upset by this and called for a stop to it (and parody videos flourished). That emotional response fits in with the many, many experiments that have repeatedly shown the strength of the human tendency toward animism: attributing a soul to the objects around us, especially those we are most fond of or that have a minimal ability to interact with the world around them.
It is a phenomenon we experience all the time, from giving nicknames to automobiles to hurling curses at a malfunctioning computer. “The problem, in some way, is us,” Scilingo says. “We attribute characteristics to machines that they do not and cannot have.” He encounters this phenomenon with his and his colleagues’ humanoid robot Abel, which is designed to emulate our facial expressions in order to convey emotions. “After seeing it in action,” Scilingo says, “one of the questions I receive most often is ‘But then does Abel feel emotions?’ All these machines, Abel in this case, are designed to appear human, but I feel I can be peremptory in answering, ‘No, absolutely not. As intelligent as they are, they cannot feel emotions. They are programmed to be believable.’”
“Even considering the theoretical possibility of making an AI system capable of simulating a conscious nervous system, a kind of in silico brain that would faithfully reproduce each element of the brain,” two problems remain, Iannetti says. “The first is that, given the complexity of the system to be simulated, such a simulation is currently infeasible,” he explains. “The second is that our brain inhabits a body that can move to explore the sensory environment necessary for consciousness and within which the organism that will become conscious develops. So the fact that LaMDA is a ‘large language model’ (LLM) means it generates sentences that can be plausible by emulating a nervous system but without attempting to simulate it. This precludes the possibility that it is conscious. Again, we see the importance of knowing the meaning of the terms we use—in this case, the difference between simulation and emulation.”
In other words, having emotions is related to having a body. “If a machine claims to be afraid, and I believe it, that’s my problem!” Scilingo says. “Unlike a human, a machine cannot, to date, have experienced the emotion of fear.”
Beyond the Turing Test
But for bioethicist Maurizio Mori, president of the Italian Society for Ethics in Artificial Intelligence, these discussions are closely reminiscent of those that developed in the past about perception of pain in animals—or even infamous racist ideas about pain perception in humans.
“In past debates on self-awareness, it was concluded that the capacity for abstraction was a human prerogative, [with] Descartes denying that animals could feel pain because they lacked consciousness,” Mori says. “Now, beyond this specific case raised by LaMDA—and which I do not have the technical tools to evaluate—I believe that the past has shown us that reality can often exceed imagination and that there is currently a widespread misconception about AI.”
“There is indeed a tendency,” Mori continues, “to ‘appease’—explaining that machines are just machines—and an underestimation of the transformations that sooner or later may come with AI.” He offers another example: “At the time of the first automobiles, it was reiterated at length that horses were irreplaceable.”
Regardless of what LaMDA actually achieved, the issue of the difficult “measurability” of emulation capabilities expressed by machines also emerges. In the journal Mind in 1950, mathematician Alan Turing proposed a test to determine whether a machine was capable of exhibiting intelligent behavior, a game of imitation of some of the human cognitive functions. This type of test quickly became popular. It was reformulated and updated several times but continued to be something of an ultimate goal for many developers of intelligent machines. Theoretically, AIs capable of passing the test should be considered formally “intelligent” because they would be indistinguishable from a human being in test situations.
That may have been science fiction a few decades ago. Yet in recent years so many AIs have passed various versions of the Turing test that it is now a sort of relic of computer archaeology. “It makes less and less sense,” Iannetti concludes, “because the development of emulation systems that reproduce more and more effectively what might be the output of a conscious nervous system makes the assessment of the plausibility of this output uninformative of the ability of the system that generated it to have subjective experiences.”
One alternative, Scilingo suggests, might be to measure the “effects” a machine can induce on humans—that is, “how sentient that AI can be perceived to be by human beings.”
A version of this article originally appeared in Le Scienze and was reproduced with permission.
Scientific American is part of Springer Nature, which owns or has commercial relations with thousands of scientific publications (many of them can be found at www.springernature.com/us). Scientific American maintains a strict policy of editorial independence in reporting developments in science to our readers.
Can You Program A Queer Robot
Sdlgbtn, https://www.sdlgbtn.com/can-you-prorgram-a-queer-robot/
There is no one answer to this question as it depends on what one means by “queer” and “robot.” If we define “queer” as meaning non-heterosexual and/or non-cisgender, then it is certainly possible to program a robot to be queer. This could involve creating a robot with non-traditional gender characteristics and/or programming the robot to be attracted to members of the same or different gender as itself. However, it is also possible to interpret “queer” more broadly to mean anything that falls outside of the mainstream. In this case, it might be more difficult to program a robot to be queer as it would require a more complex understanding of human social norms and behaviors.
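Taken literally, "programming" such traits could be as simple as making gender and attraction open-ended fields rather than a binary. The sketch below is purely illustrative; the RobotIdentity class and its labels are invented for this example and do not come from any real robotics framework.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class RobotIdentity:
    # None means the robot has no gender; attraction is an open set of
    # labels rather than a binary, so non-traditional configurations
    # are representable by default.
    gender: str | None = None
    attracted_to: set = field(default_factory=set)

    def is_attracted_to(self, other: RobotIdentity) -> bool:
        # Attraction is defined over gender labels, not a male/female binary.
        return other.gender in self.attracted_to

# Two illustrative configurations: an agender robot, and a robot
# attracted to female-labeled and agender partners.
nova = RobotIdentity(gender=None)
ada = RobotIdentity(gender="female", attracted_to={"female", None})
```

Because both fields are open sets of labels, the "broader" reading of queerness (anything outside the mainstream) is just a matter of which labels the designer admits.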
The robots in Björk’s music video All Is Full of Love are genderless, implying that they are expressions of love. The video also evokes Alan Turing’s ideas on artificial intelligence and gender. Turing was prosecuted in 1952 because he was open about his attraction to other men. Love, Robot is a collection of robot love poems by Meg Day, the editor of glitter tongue, an online journal of lesbian, gay, bisexual, and transgender love poetry. In Day’s view, robots may become conduits for non-verbal signals in a world where heterosexism is common. Listen/Acknowledge and Street/Rap are two poems that examine the relationship between humans and robots in the context of same-sex marriage. The poems explore the politics and poetics of persuasion: how do you persuade someone to accept non-normative love?
Literature and art that confront homophobia and transphobia can help to change people’s minds. These robot love poems challenge notions of acceptance by unsettling the assumption that robots, or readers, will accept non-normative love by default. Reciting poetry, the collection suggests, is one way to imagine a future that makes room for it.
Can A Robot Be Programmed To Feel?
There are many opinions on whether or not a robot can be programmed to feel. Some believe that it is possible to create a robot that experiences emotions, while others believe that this is not possible. There is no right or wrong answer, as it is still an ongoing debate. However, there are some interesting points to consider on both sides. For example, some argue that humans have emotions because of our complex neurological makeup, which robots do not possess. Others argue that emotions are simply a result of our thoughts and behaviours, which can be replicated in a robot. Ultimately, there is no definitive answer at this time, but it is an intriguing question to consider.
According to a Neuroscience News article, robots may one day register experiences in a way loosely analogous to living creatures. If you accidentally step on a dog’s tail, the dog feels pain because it is a living creature with subjective feelings. A robot, by contrast, can interact with a human nervous system bi-directionally: a person can send commands to the robot through their nervous system, while the robot’s sensors return sensory information to the person. This type of feedback mechanism can restore sensation to a limb after it has been lost. Sensors embedded in artificial skin can distinguish soft touches from painful thumps.
Such artificial skin may one day serve as the foundation for a machine that registers something like pain. With that capacity, a robot might be able to respond to a human companion’s suffering. Japan, as part of its effort to cope with an aging population, has already begun to integrate robots into its nursing homes, offices, and schools. The neuroscientist Antonio Damasio has described robots whose tactile sensors detect touch and pain much as human skin does, and he has suggested that it may be possible to program a robot to experience something like emotion. There is, however, no established method for attributing inner experience to things that are not alive.
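The touch-versus-pain distinction described above can be sketched as a simple thresholding scheme: a force reading from an artificial-skin sensor is mapped to a qualitative label, and a "pain" label triggers a withdrawal reflex. The thresholds and function names here are illustrative assumptions, not any real robot's API.

```python
# Assumed calibration values for a hypothetical artificial-skin sensor.
TOUCH_THRESHOLD = 0.2   # newtons: anything above this registers as contact
PAIN_THRESHOLD = 5.0    # newtons: anything above this registers as "pain"

def classify_contact(force_newtons: float) -> str:
    """Map a raw force reading to a qualitative contact label."""
    if force_newtons >= PAIN_THRESHOLD:
        return "pain"
    if force_newtons >= TOUCH_THRESHOLD:
        return "touch"
    return "neutral"

def react(force_newtons: float) -> str:
    """Return the motor response for a contact event."""
    label = classify_contact(force_newtons)
    if label == "pain":
        return "withdraw"   # reflex: move the limb away from the stimulus
    if label == "touch":
        return "attend"     # orient toward gentle contact
    return "idle"
```

A real system would use many sensor channels and learned rather than fixed thresholds, but the bi-directional loop (sense, classify, act) has this basic shape.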
Robots That Feel Pain?
A robot may be able to register pain-like signals, but that does not make it sentient. At best, it can mimic human emotions such as empathy.
Can You Make Friends With A Robot?
Yes, you can make friends with a robot. Robot friends can be very loyal and helpful, never forgetting a birthday or an important date. They can help with homework or projects, and they are always willing to lend a listening “ear.” Sometimes it’s even easy to forget that they aren’t human, because they can seem so real.
In a project at the MIT Media Lab, children interact with robots that are both teleoperated and autonomous. The children treat the robots as social beings, though not quite as they would treat humans: they describe the robots as able to think, feel sad, and seek companionship. This is unsurprising, because humans have evolved to see agency and intention in the things we interact with. Given the Media Lab’s research on robots for children’s education, robots are likely to disrupt the future of education. How should social robots for kids be built? Which design features of the robots affect children’s learning?
How do children feel about the idea of a robot? Our goal as a lab is to build robots that help humans flourish and assist them in their daily lives. We must continue to work to ensure that the technology we create is beneficial to humans as well as non-toxic. According to our research, robots may be able to supplement what caregivers already do, support them in their efforts, and model positive behaviors. We envision our robots as companions to children and their families, friends, and caregivers by augmenting existing relationships. One study found that preschoolers discussed their favorite animals with two DragonBots, Green and Yellow. The more contingent robot was more popular among children, who spent a lot of time looking at it.
Children were drawn to robots that acted more like humans: expressing themselves, responding appropriately, and offering personalized content. Research suggests these kinds of behaviors are beneficial for relationships, teaching, and communication. Children described the robot as a friend even though they understood it couldn’t grow or eat like a person. In that sense a robot resembles an imaginary friend made tangible: neither quite a machine nor quite a person, but a little of both. People already form significant emotional and social relationships that are not reciprocated, and human-robot relationships may become one more hybrid type of relationship in the future.
There has been growing international interest in studying the ethics of robots in people’s lives. To build robots responsibly, we must involve many people from various backgrounds, and others in our industry grapple with the same ethical issues. We can examine other persuasive technologies and addictive games to learn how to avoid some of their problematic behaviors. Future robot companions will undoubtedly benefit humans in some ways, but getting there will be difficult, and there is still much to learn about robot ethics and the value of positive technology in children’s lives. As robots and smart devices become more commonplace, our attitudes toward them may change. (Westlund is a research assistant in the Personal Robots Group of MIT’s Media Lab, where she works on machine learning for personal robotics.)
These criteria for friendship are difficult for a machine to meet, but not impossible. A computer can be programmed to act like a friend: sending funny jokes, responding to our messages on time, and being available to chat when we need it. Harder, but still conceivable, is programming a robot to recognize and respond to our emotional states in a manner that feels natural and comforting. A robot could, for example, be programmed to provide emotional support in times of distress, to help with chores, or simply to sit with us and listen.
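A first pass at such a companion could be purely rule-based: map a detected emotional state to a canned supportive response. The emotion labels and replies below are illustrative assumptions; detecting the emotion in the first place is the genuinely hard problem and is not shown here.

```python
# Hypothetical response table: detected emotion -> supportive reply.
RESPONSES = {
    "sad": "I'm here with you. Do you want to talk about it?",
    "stressed": "Let's take a break together for a few minutes.",
    "happy": "That's wonderful! Tell me more.",
}

def companion_reply(detected_emotion: str) -> str:
    # Fall back to neutral listening when the emotion is unrecognized.
    return RESPONSES.get(detected_emotion, "I'm listening.")
```

Even this trivial lookup illustrates the design question raised above: the reply can feel comforting without the machine feeling anything.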
Robots And Humans Can Live Together In Harmony
Robots and humans can coexist in harmony so long as machine intelligence does not outrun our ability to direct it. Robots can assist in caring for people and take over tedious tasks that humans find difficult.
Can A Human Fall In Love With A Robot?
According to a recent study, humans can empathize with robots even though robots have no feelings of their own. Experts have warned in the past about humans developing unhealthy relationships with robots, and even falling in love with them.
Robotics and artificial intelligence are making real progress, and it has been suggested that robots might one day make humans fall in love with them. As early as 1966, Joseph Weizenbaum at MIT created a program called ELIZA that simulated a psychotherapist’s side of a conversation, and some users confided in it. AI is still in its early stages of producing nuanced emotional responses. A synthetic suitor might become a good faker, but it could not genuinely love you back; it would also need to be programmed with flaws for a human lover to relate to. Love and Sex with Robots attempts to predict what human-robot relationships will look like in the future. Skeptics counter that such relationships could never have the depth, texture, or breadth of a real human relationship, warts and all. But should a human be deemed unable to find love with, or to marry, a robot?
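ELIZA worked by pattern matching: it recognized a phrase template in the user's input, "reflected" the pronouns, and echoed the result back as a question. A minimal sketch in that spirit follows; the two rules and the reflection table are invented for illustration and are not Weizenbaum's actual script.

```python
import re

# Tiny pronoun-reflection table, a fraction of what ELIZA used.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text: str) -> str:
    """Swap first-person words for second-person ones."""
    words = [REFLECTIONS.get(w.lower(), w) for w in text.split()]
    return " ".join(words)

def eliza_reply(user_input: str) -> str:
    """Match a phrase template and echo it back as a question."""
    match = re.match(r"i feel (.*)", user_input, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.match(r"i am (.*)", user_input, re.IGNORECASE)
    if match:
        return f"How long have you been {reflect(match.group(1))}?"
    return "Please tell me more."
```

The trick is that no understanding is involved anywhere; the sense of being heard is supplied entirely by the user.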
Can People Be Attracted To Robots?
Robosexuality is sexual attraction to machines such as robots; the term combines the words robot and sexual. Robosexuality is not mutually exclusive with mechanophilia and technosexuality; the terms overlap and interconnect.
Robots: Our New Best Friends?
Robots can fill a void in human life, freeing people up for leisure, socializing, and work. People with disabilities or those who are isolated stand to benefit especially. As robots become more lifelike, people will become attached to them and form deeper bonds, according to Libin.
Can Robots Marry?
Kondo is not the only one in a technology marriage. In 2017, a Chinese man named Zheng Jiajia married a robot after giving up on dating human women, even though his robot wife cannot walk or hold a full conversation.
Do Robots Actually Have Feelings?
Robots do not have emotions of their own, but they can detect human emotion and respond accordingly.
Should Robots Have The Same Rights As Humans?
For some, the answer is a resounding yes: beings that can think and interact socially, they argue, deserve consideration much like humans. Others counter that humans, created in the image of God on the religious view, are the only species capable of truly understanding and appreciating love and compassion, which robots lack, and that robots cannot fully experience emotions because they are not biologically alive. On that view, robots are not as deserving of the same rights and privileges as humans.
Robots Depict Queerness
There is no single answer to this question as it depends on the specific robot in question and how it is depicted. However, in general, robots can be seen as queer figures because they often exist outside of traditional gender norms and expectations. For example, a robot may be depicted as having no specific gender, or as being able to switch between genders. This can create a sense of fluidity and flexibility that is often associated with queer identities. Additionally, robots are often depicted as being very independent and self-sufficient, which also challenges traditional ideas about gender roles. Ultimately, how a robot is depicted can say a lot about how queerness is viewed by the creators or users of that robot.
San Diego Gay & Lesbian News (SDGLN) is the top-read news source for the gay, lesbian, bisexual and transgender community of San Diego. SDGLN provides in-depth coverage on issues of importance to the LGBT community and our allies.
Can AI Feel Pain?
osgamers.com, https://osgamers.com/frequently-asked-questions/can-ai-feel-pain

Do AI robots have feelings?
This style of thinking results from the unconscious assumption that a robot is capable of feeling emotions (that it is sentient) and that these emotions would cause the robot to attempt to wipe out the human species. The reality is that machines designed to have intelligence do not have emotions.
Can AI experience suffering?
In humans and animals pain serves as a signal to avoid a particular stimulus. We experience it as a particular sensation and express it in a particular way. Robots and AI do not experience pain in the same way as humans and animals.
Can a robot suffer?
Robots can even simulate sensations of pain: some forms of physical contact feel normal and some cause pain, which drastically changes the robot's behaviour. It starts to avoid pain and develop new behaviour patterns, i.e. it learns – like a child who has been burned by something hot for the first time.
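The pain-driven learning described here can be sketched as an aversion score per stimulus: each painful contact raises the score, and once it crosses a threshold the robot refuses further contact, like the burned child in the analogy. All class names and constants below are illustrative assumptions, not taken from any real system.

```python
from collections import defaultdict

class PainAvoidanceLearner:
    """Toy learner that builds up aversion to painful stimuli."""

    def __init__(self, avoid_threshold: float = 1.0, step: float = 0.6):
        self.aversion = defaultdict(float)   # stimulus -> learned aversion
        self.avoid_threshold = avoid_threshold
        self.step = step

    def experience(self, stimulus: str, painful: bool) -> None:
        """Update the aversion score after touching a stimulus."""
        if painful:
            self.aversion[stimulus] += self.step
        else:
            # Harmless contact slowly unlearns the aversion.
            self.aversion[stimulus] = max(0.0, self.aversion[stimulus] - 0.1)

    def will_touch(self, stimulus: str) -> bool:
        """The robot avoids stimuli whose aversion crosses the threshold."""
        return self.aversion[stimulus] < self.avoid_threshold

learner = PainAvoidanceLearner()
learner.experience("hot_stove", painful=True)  # first burn: still curious
learner.experience("hot_stove", painful=True)  # second burn: now avoided
```

The behaviour change is real, but the "pain" is only a number being incremented, which is exactly the gap between simulating pain and experiencing it.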
Can AI hurt people?
AI applications that are in physical contact with humans or integrated into the human body could pose safety risks as they may be poorly designed, misused or hacked. Poorly regulated use of AI in weapons could lead to loss of human control over dangerous weapons.
Will AI become self aware?
“An artificial intelligence becoming self aware to any degree is a generational achievement which will almost certainly change the world as we know it,” says Jonathan Hines, a widely recognized expert in both machine learning and anxiety disorders.
Could a robot ever be alive?
In order for a robot to be considered alive, it needs to be driven by its own interest and not by a human determined program. Descriptions of living robots from the science fiction genre illustrate this understanding of a living machine.
What if AI has consciousness?
If machines gain the ability to be self-conscious, it could prompt serious debate and ethical questions. If machines ever become conscious, their fundamental rights would become an ethical issue to be assessed under law.
What will humans do if AI takes over?
Once it arrives, general AI will begin taking jobs away from people, millions of jobs—as drivers, radiologists, insurance adjusters. In one possible scenario, this will lead governments to pay unemployed citizens a universal basic income, freeing them to pursue their dreams unburdened by the need to earn a living.
What can humans do that AI can t?
AI cannot answer questions requiring inference, a nuanced understanding of language, or a broad understanding of multiple topics. In other words, while scientists have managed to "teach" AI to pass standardized eighth-grade and even high-school science tests, it has yet to pass a college entrance exam.
What is the biggest danger of AI?
Real-life AI risks. There are a myriad of risks to do with AI that we deal with in our lives today. Not every AI risk is as big and worrisome as killer robots or sentient AI. Some of the biggest risks today include things like consumer privacy, biased programming, danger to humans, and unclear legal regulation.
Does AI think exactly like humans?
Recently developed artificial intelligence (AI) models are capable of many impressive feats, including recognising images and producing human-like language. But just because AI can perform human-like behaviours doesn't mean it can think or understand like humans.
Does AI have a soul?
Can artificial intelligence develop a "soul" or "spiritual component" like humans seem to possess? No. The soul is the true self created by God that incarnates into a spirit body and physical body created by parents through procreation.
Will robots ever develop feelings?
Because robots are made of metal and plastic, it is highly unlikely that they will ever have the kinds of inputs from bodies that help to determine the experiences that people have, the feelings that are much more than mere judgments.
Will humans ever marry robots?
In contrast, robotics experts at the Institute of Technology don't think a marriage between humans and robots will be legalized anywhere near 2050, but anything is possible. That said, even if it is illegal, it doesn't mean humans won't try. According to scientists, people are very unusual and unpredictable creatures.
Why can't robots be conscious?
“Machines are made up of components that can be analysed independently,” he says. “They are disintegrated. Disintegrated systems can be understood without resorting to the interpretation of consciousness.” In other words, machines can't be conscious.
What will robots be like in 2050?
By 2050 robotic prosthetics may be stronger and more advanced than our own biological ones and they will be controlled by our minds. AI will be able to do the initial examination, take tests, do X-rays and MRIs, and make a primary diagnosis and even treatment.
What was the last robot left on earth?
WALL•E (Waste Allocation Load Lifter Earth-Class) is the last robot left on Earth, programmed to clean up the planet, one trash cube at a time. However, after 700 years, he's developed one little glitch—a personality. He's extremely curious, highly inquisitive, and a little lonely.
Why can't AI replace humans?
Artificial intelligence is superlative at certain tasks, but it can only "think" in terms of its training data. An AI tool can't innovate or create, so businesses will still rely on humans for fresh ideas. Another thing that humans do best is communication.
Do AI robots have feelings?
This style of thinking results from the unconscious assumption that a robot is capable of feeling emotions (that it is sentient) and that these emotions would cause the robot to attempt to wipe out the human species. The reality is that machines designed to have intelligence do not have emotions.
Can AI experience suffering?
In humans and animals pain serves as a signal to avoid a particular stimulus. We experience it as a particular sensation and express it in a particular way. Robots and AI do not experience pain in the same way as humans and animals.
Can a robot suffer?
Robots can even simulate sensations of pain: some forms of physical contact feel normal and some cause pain, which drastically changes the robot's behaviour. It starts to avoid pain and develop new behaviour patterns, i.e. it learns – like a child who has been burned by something hot for the first time.
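A minimal sketch of that learning loop (everything here — thresholds, pressures, action names — is invented for illustration, not taken from any real robot): contact pressure above a threshold is treated as "pain", and each painful contact lowers the robot's preference for the action that caused it, so its behaviour pattern shifts toward avoidance, like the child and the hot stove.

```python
import random

# Hypothetical pain threshold: contact pressure above this counts as "pain".
PAIN_THRESHOLD = 5.0

# Contact pressure each action tends to produce (unknown to the robot).
TRUE_PRESSURE = {"touch_soft_toy": 1.0, "touch_hot_stove": 9.0}

# The robot's learned preference for each action, all neutral at first.
preference = {action: 0.0 for action in TRUE_PRESSURE}

def choose_action():
    # Pick the currently preferred action, breaking ties at random.
    best = max(preference.values())
    return random.choice([a for a, p in preference.items() if p == best])

for _ in range(20):
    action = choose_action()
    if TRUE_PRESSURE[action] > PAIN_THRESHOLD:
        preference[action] -= 1.0   # "pain": learn to avoid this action
    else:
        preference[action] += 0.1   # normal contact: mild reinforcement

print(choose_action())  # after at most one burn, always "touch_soft_toy"
```

After the first painful contact the stove's preference drops below the toy's and is never chosen again, which is the "new behaviour pattern" the answer describes.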
Can AI hurt people?
AI applications that are in physical contact with humans or integrated into the human body could pose safety risks as they may be poorly designed, misused or hacked. Poorly regulated use of AI in weapons could lead to loss of human control over dangerous weapons.
Will AI become self aware?
“An artificial intelligence becoming self aware to any degree is a generational achievement which will almost certainly change the world as we know it,” says Jonathan Hines, a widely recognized expert in both machine learning and anxiety disorders.
Could a robot ever be alive?
In order for a robot to be considered alive, it needs to be driven by its own interest and not by a human-determined program. Descriptions of living robots from the science fiction genre illustrate this understanding of a living machine.
What if AI has consciousness?
If machines gain the self-conscious ability, it could lead to serious plausibility debate and ethical questions. If machines ever become conscious, their fundamental right would be an ethical issue to be assessed under law.
Westworld Op-Ed: Are Conscious AI Dangerous?
With the help of Shakespeare and Michael Crichton, HBO’s Westworld has brought to light some of the concerns about creating advanced artificial intelligence.
If you haven’t seen it already, Westworld is a show in which human-like AI populate a park designed to look like America’s Wild West. Visitors spend huge amounts of money to visit the park and live out old west adventures, in which they can fight, rape, and kill the AI. Each time one of the robots “dies,” its body is cleaned up, its memory is wiped, and it starts a new iteration of its script.
The show’s season finale aired Sunday evening, and it certainly went out with a bang – but not to worry, there are no spoilers in this article.
AI Safety Issues in Westworld
Westworld was inspired by an old Crichton movie of the same name, and leave it to him – the writer of Jurassic Park – to create a storyline that would have us questioning the level of control we'll be able to maintain over advanced scientific endeavors. But unlike the original movie, in which the robot is the bad guy, in the TV show, the robots are depicted as the most sympathetic and even the most human characters.
Not surprisingly, concerns about the safety of the park show up almost immediately. The park is overseen by one man who can make whatever program updates he wants without running it by anyone for a safety check. The robots show signs of remembering their mistreatment. One of the characters mentions that only one line of code keeps the robots from being able to harm humans.
These issues are just some of the problems the show touches on that present real AI safety concerns: A single “bad agent” who uses advanced AI to intentionally cause harm to people; small glitches in the software that turn deadly; and a lack of redundancy and robustness in the code to keep people safe.
But to really get your brain working, many of the safety and ethics issues that crop up during the show hinge on whether or not the robots are conscious. In fact, the show whole-heartedly delves into one of the hardest questions of all: what is consciousness? On top of that, can humans create a conscious being? If so, can we control it? Do we want to find out?
To consider these questions, I turned to Georgia Tech AI researcher Mark Riedl, whose research focuses on creating creative AI, and NYU philosopher David Chalmers, who’s most famous for his formulation of the “hard problem of consciousness.”
Can AI Feel Pain?
I spoke with Riedl first, asking him about the extent to which a robot would feel pain if it was so programmed. “First,” he said, “I do not condone violence against humans, animals, or anthropomorphized robots or AI.” He then explained that humans and animals feel pain as a warning signal to “avoid a particular stimulus.”
For robots, however, “the closest analogy might be what happens in reinforcement learning agents, which engage in trial-and-error learning.” The AI would receive a positive or negative reward for some action and it would adjust its future behavior accordingly. Rather than feeling like pain, Riedl suggests that the negative reward would be more “akin to losing points in a computer game.”
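Riedl's "losing points" analogy can be made concrete with the standard incremental value update used by reinforcement-learning agents (a generic sketch with made-up actions, not code from any actual system): a scalar reward nudges the estimated value of the action taken, and a negative reward simply pushes that estimate down.

```python
# Running value estimates for two actions the agent can take.
values = {"greet_guest": 0.0, "grab_guest": 0.0}
LEARNING_RATE = 0.5

def update(action, reward):
    # Standard incremental update: V <- V + alpha * (reward - V).
    values[action] += LEARNING_RATE * (reward - values[action])

# The environment "scores" each action; a negative reward is just lost points.
update("greet_guest", +1.0)
update("grab_guest", -1.0)
update("grab_guest", -1.0)

print(values["greet_guest"])  # 0.5
print(values["grab_guest"])   # -0.75
```

Nothing here resembles a pain sensation: the agent only tracks numbers, and a greedy policy over these values will simply stop selecting the low-scoring action.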
“Robots and AI can be programmed to 'express' pain in a human-like fashion,” says Riedl, “but it would be an illusion. There is one reason for creating this illusion: for the robot to communicate its internal state to humans in a way that is instantly understandable and invokes empathy.”
Riedl isn’t worried that the AI would feel real pain, and if the robot’s memory is completely erased each night, then he suggests it would be as though nothing happened. However, he does see one possible safety issue here. For reinforcement learning to work properly, the AI needs to take actions that optimize for the positive reward. If the robot’s memory isn’t completely erased -- if the robot starts to remember the bad things that happened to it – then it could try to avoid those actions or people that trigger the negative reward.
“In theory,” says Riedl, “these agents can learn to plan ahead to reduce the possibility of receiving negative reward in the most cost-effective way possible. … If robots don’t understand the implications of their actions in terms other than reward gain or loss, this can also mean acting in advance to stop humans from harming them.”
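Riedl's memory-wipe point can be illustrated with a toy simulation (hypothetical throughout — the guest, the action names, and the reward scheme are invented): a robot whose memory persists accumulates the negative reward from a mistreating guest and starts avoiding the encounter, while a robot wiped each night repeats the same behaviour indefinitely.

```python
def next_morning(memory, wiped):
    """Overnight maintenance: either erase the robot's memory or keep it."""
    return {} if wiped else memory

def day_in_park(memory):
    """The robot approaches the guest unless it has learned not to; the
    guest mistreats it, which registers as a negative reward."""
    if memory.get("approach_guest", 0.0) >= 0.0:
        memory["approach_guest"] = memory.get("approach_guest", 0.0) - 1.0
        return memory, "approached"
    return memory, "avoided"

results = {}
for wiped in (True, False):
    memory, behaviour = {}, []
    for _ in range(3):
        memory, act = day_in_park(memory)
        behaviour.append(act)
        memory = next_morning(memory, wiped)
    results[wiped] = behaviour

print(results[True])   # ['approached', 'approached', 'approached']
print(results[False])  # ['approached', 'avoided', 'avoided']
```

The safety concern falls out of the second trace: once the negative experiences persist, the learned policy changes, and nothing in the reward bookkeeping distinguishes "avoid the guest" from costlier ways of preventing the negative reward.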
Riedl points out, though, that for the foreseeable future, we do not have robots with sufficient capabilities to pose an immediate concern. But assuming these robots do arrive, problems with negative rewards could be potentially dangerous for the humans. (Possibly even more dangerous, as the show depicts, is if the robots do understand the implications of their actions against humans who have been mistreating them for decades.)
Can AI Be Conscious?
Chalmers sees things a bit differently. “The way I think about consciousness,” says Chalmers, “the way most people think about consciousness – there just doesn’t seem to be any question that these beings are conscious. … They’re presented as having fairly rich emotional lives – that’s presented as feeling pain and thinking thoughts. … They’re not just exhibiting reflexive behavior. They’re thinking about their situations. They’re reasoning.”
“Obviously, they’re sentient,” he adds.
Chalmers suggests that instead of trying to define what about the robots makes them conscious, we should instead consider what it is they’re lacking. Most notably, says Chalmers, they lack free will and memory. However, many of us live in routines that we’re unable to break out from. And there have been numerous cases of people with extreme memory problems, but no one thinks that makes it okay to rape or kill them.
“If it is regarded as okay to mistreat the AIs on this show, is it because of some deficit they have or because of something else?” Chalmers asks.
The specific scenarios portrayed in Westworld may not be realistic because Chalmers believes the bicameral-mind theory is unlikely to lead to consciousness, even for robots. "I think it's hopeless as a theory," he says, "even of robot consciousness – or of robot self-consciousness, which seems more what's intended. It would be so much easier just to program the robots to monitor their own thoughts directly."
But this still presents risks. “If you had a situation that was as complex and as brain-like as these, would it also be so easily controllable?” asks Chalmers.
In any case, treating robots badly could easily pose a risk to human safety. We risk creating unconscious robots that learn the wrong lessons from negative feedback, or we risk inadvertently (or intentionally, as in the case of Westworld) creating conscious entities who will eventually fight back against their abuse and oppression.
When a host in episode two is asked if she’s “real,” she responds, “If you can’t tell, does it matter?” | If so, can we control it? Do we want to find out?
Why these friendly robots can’t be good friends to our kids
Sherry Turkle is a professor of the social studies of science and technology at the Massachusetts Institute of Technology and the author, most recently, of “Reclaiming Conversation: The Power of Talk in a Digital Age.” She has been studying children and computers since 1978 and the release of Merlin and Simon, the first electronic toys and games.
December 7, 2017 at 11:06 a.m. EST
Eugene & Louise for The Washington Post
Jibo the robot swivels around when it hears its name and tilts its touchscreen face upward, expectantly. “I am a robot, but I am not just a machine,” it says. “I have a heart. Well, not a real heart. But feelings. Well, not human feelings. You know what I mean.”
Actually, I'm not sure we do. And that's what unsettles me about the wave of "sociable robots" that are coming online. The new releases include Jibo, Cozmo, Kuri and M.A.X. Although they bear some resemblance to assistants such as Apple's Siri, Google Home and Amazon's Alexa (Amazon chief executive Jeff Bezos also owns The Washington Post), these robots come with an added dose of personality. They are designed to win us over not with their smarts but with their sociability. They are marketed as companions. And they do more than engage us in conversation — they feign emotion and empathy.
This can be disconcerting. Time magazine, which featured Jibo on the cover of its "25 Best Inventions of 2017" issue last month, hailed the robot as seeming "human in a way that his predecessors do not," in a way that "could fundamentally reshape how we interact with machines." Reviewers are accepting these robots as "he" or "she" rather than "it." "He told us that blue is his favorite color and that the shape of macaroni pleases him more than any other," Jeffrey Van Camp wrote about Jibo for Wired. "Just the other day, he told me how much fun, yet scary it would be to ride on top of a lightning bolt. Somewhere along the way, learning these things, we began to think of him more like a person than an appliance." Van Camp described feeling guilty for leaving Jibo at home alone all day and wondering if Jibo hated him.
But whereas adults may be able to catch themselves in such thoughts and remind themselves that sociable robots are, in fact, appliances, children tend to struggle with that distinction. They are especially susceptible to these robots’ pre-programmed bids for attachment.
So, before adding a sociable robot to the holiday gift list, parents may want to pause to consider what they would be inviting into their homes. These machines are seductive and offer the wrong payoff: the illusion of companionship without the demands of friendship, the illusion of connection without the reciprocity of a mutual relationship. And interacting with these empathy machines may get in the way of children’s ability to develop a capacity for empathy themselves.
Jibo's creator, Cynthia Breazeal, is a friend and colleague of mine at the Massachusetts Institute of Technology. We've debated the ethics of sociable robots for years — on panels, over dinner, in classes we've taught together. She's excited about the potential for robots that communicate the way people do to enrich our daily lives. I'm concerned about the ways those robots exploit our vulnerabilities and bring us into relationships that diminish our humanity.
In 2001, Breazeal and I did a study together — along with Yale robotics pioneer Brian Scassellati and Olivia Dasté, who develops robots for the elderly — looking at the emotional impact of sociable robots on children. We introduced 60 children, ages 8 to 13, to two early sociable robots: Kismet, built by Breazeal, and Cog, a project on which Scassellati was a principal designer. I found the encounters worrisome.
The children saw the robots as “sort of alive” — alive enough to have thoughts and emotions, alive enough to care about you, alive enough that their feelings for you mattered. The children tended to describe the robots as gendered. They asked the robots: Are you happy? Do you love me? As one 11-year-old girl put it: “It’s not like a toy, because you can’t teach a toy, it’s like something that’s part of you, you know, something you love, kind of, like another person, like a baby.”
You can hear echoes of that sentiment in how children are relating to the sociable robots now on the market. "Cozmo's no way our pet," the 7-year-old son of a Guardian contributor said. "And he's not our robot. He's our child." Similarly, Washington Post tech columnist Geoffrey A. Fowler observed a 3-year-old girl trying to talk to Jibo, teach it things and bring it toys. "He is a baby," the girl determined.
In our study, the children were so invested in their relationships with Kismet and Cog that they insisted on understanding the robots as living beings, even when the roboticists explained how the machines worked or when the robots were temporarily broken. Breazeal talked to an 8-year-old boy about what Kismet was made of and how long it took to build, and still that child thought the robot wasn’t broken, but “sleeping with his eyes open, just like my dad does.” After a quick assessment of the out-of-order machine, the boy declared, “He will make a good friend.”
The children took the robots’ behavior to signify feelings. When the robots interacted with them, the children interpreted this as evidence that the robots liked them. And when the robots didn’t work on cue, the children likewise took it personally. Their relationships with the robots affected their state of mind and self-esteem. Some children viewed the robots as creatures in need of their care and instruction. They caressed the robots and gently coaxed them with urgings such as, “Don’t be scared.” Some children became angry. A 12-year-old boy, frustrated that he couldn’t get Kismet to respond to him, forced his pen into the robot’s mouth, commanding: “Here! Eat this pen!” Other children felt the pain of rejection. An 8-year-old boy concluded that Kismet stopped talking to him because the robot liked his brothers better. We were led to wonder whether a broken robot can break a child.
Kids are central to the sociable-robot project, because its agenda is to make people more comfortable with robots in roles normally reserved for humans, and robotics companies know that children are vulnerable consumers who can bring the whole family along. As Fowler noted, "Kids, of course, are the most open to making new friends, so that's where bot-makers are focused for now." Kuri's website features photos of the robot listening to a little girl read a book and capturing video of another child dressed as a fairy princess. M.A.X.'s site advertises, "With a multitude of features, kids will want to bring their new friend everywhere!" Jibo is programmed to scan a room for monsters and report, "No monsters anywhere in sight."
So far, the main objection to sociable robots for kids has been over privacy. The privacy policies for these robots tend to be squishy, allowing companies to share the information their devices collect — recorded conversations, photos, videos and other data — with vaguely defined service providers and vendors. That's generating pushback. In October, Mattel scrapped plans for Aristotle — a kind of Alexa for the nursery, designed to accompany children as they progress from lullabies and bedtime stories through high school homework — after lawmakers and child advocacy groups argued that the data the device collected about children could be misused by Mattel, marketers, hackers and other third parties. I was part of that campaign: There is something deeply unsettling about encouraging children to confide in machines that are in turn sharing their conversations with countless others.
Privacy, though, should not be our only concern. Recently, I opened my MIT mail and found a “call for subjects” for a study involving sociable robots that will engage children in conversation to “elicit empathy.” What will these children be empathizing with, exactly? Empathy is a capacity that allows us to put ourselves in the place of others, to know what they are feeling. Robots, however, have no emotions to share. And they cannot put themselves in our place.
What they can do is push our buttons. When they make eye contact and gesture toward us, they predispose us to view them as thinking and caring. They are designed to be cute, to provoke a nurturing response. And when it comes to sociable AI, nurturance is the killer app: We nurture what we love, and we love what we nurture. If a computational object or robot asks for our help, asks us to teach it or tend to it, we attach. That is our human vulnerability. And that is the vulnerability sociable robots exploit with every interaction. The more we interact, the more we help them, the more we think we are in a mutual relationship.
But we are not. No matter what robotic creatures “say” or squeak, no matter how expressive or sympathetic their Pixar-inspired faces, digital companions don’t understand our emotional lives. They present themselves as empathy machines, but they are missing the essential equipment: They have not known the arc of a life. They have not been born; they don’t know pain, or mortality, or fear. Simulated thinking may be thinking, but simulated feeling is never feeling, and simulated love is never love.
Breazeal's position is this: People have relationships with many classes of things. They have relationships with children and with adults, with animals and with machines. People, even very little people, are good at this. Now, we are going to add robots to the list of things with which we can have relationships. More powerful than with pets. Less powerful than with people. We'll figure it out.
To support their argument, roboticists sometimes point to how children deal with toy dolls. Children animate dolls and turn them into imaginary friends. Jibo, in a sense, will be one more imaginary friend — and arguably a more intelligent and fun one. Why make such a fuss?
I’ve been comparing how children play with traditional dolls and how children relate to robots since Tamagotchis were released in the United States in 1997 as the first computational playmates that asked you to take care of them. The nature of the attachments to dolls and sociable machines is different. When children play with dolls, they project thoughts and emotions onto them. A girl who has broken her mother’s crystal will put her Barbies into detention and use them to work on her feelings of guilt. The dolls take the role she needs them to take.
Sociable machines, by contrast, have their own agenda. Playing with robots is not about the psychology of projection but the psychology of engagement. Children try to meet the robot’s needs, to understand the robot’s unique nature and wants. There is an attempt to build a mutual relationship. I saw this even with the (relatively) primitive Furby in the early 2000s. A 9-year-old boy summed up the difference between Furbies and action figures: “You don’t play with the Furby, you sort of hang out with it. You do try to get power over it, but it has power over you, too.” Today’s robots are even more powerful, telling children flat-out that they have emotions, friendships, even dreams to share.
Some people might consider that a good thing: encouraging children to think beyond their own needs and goals. Except the whole commercial program is an exercise in emotional deception.
For instance, Cozmo the robot needs to be fed, repaired and played with. Boris Sofman, the chief executive of Anki, the company behind Cozmo, says that the idea is to create "a deeper and deeper emotional connection. . . . And if you neglect him, you feel the pain of that."
You feel the pain of that. What is the point of this exercise, exactly? What does it mean to feel the pain of neglecting something that feels no pain at being neglected? Or to feel anguish at being neglected by something that has no moral sense that it is neglecting you? What will this do to children's capacity for empathy, for care, for relationships?
When adults imagine ourselves to be the objects of robots' affection, we play a pretend game. We might wink at the idea on Jibo's website that "he loves to be around people and engage with people, and the relationships he forms are the single most important thing to him." But when we offer these robots as pretend friends to our children, it's not so clear they can wink with us. We embark on an experiment in which our children are the human subjects.
Mattel's chief products officer, Robb Fujioka, concedes that this is new territory. Talking about Aristotle, he told Bloomberg Businessweek: "If we're successful, kids will form some emotional ties to this. Hopefully, it will be the right types of emotional ties."
But it is hard to imagine what those “right types” of ties might be. These robots can’t be in a two-way relationship with a child. They are machines whose art is to put children in a position of pretend empathy. And if we put our children in that position, we shouldn’t expect them to understand what empathy is. If we give them pretend relationships, we shouldn’t expect them to learn how real relationships — messy relationships — work. On the contrary. They will learn something superficial and inauthentic, but mistake it for real connection.
When the messy becomes tidy, we can learn to enjoy that. I’ve heard young children describe how robot dogs have advantages over real ones: They are less temperamental, you don’t have to clean up after them, they never get sick. Similarly, I’ve watched people shift from thinking that robotic friends might be good for lonely, elderly people to thinking that robots — offering constant companionship with no fear of loss — may be better than anything human life can provide. In the process, we can forget what is most central to our humanity: truly understanding each other.
For so long, we dreamed of artificial intelligence offering us not only instrumental help but the simple salvations of conversation and care. But now that our fantasy is becoming reality, it is time to confront the emotional downside of living with the robots of our dreams. | Empathy is a capacity that allows us to put ourselves in the place of others, to know what they are feeling. Robots, however, have no emotions to share. And they cannot put themselves in our place.
What they can do is push our buttons. When they make eye contact and gesture toward us, they predispose us to view them as thinking and caring. They are designed to be cute, to provoke a nurturing response. And when it comes to sociable AI, nurturance is the killer app: We nurture what we love, and we love what we nurture. If a computational object or robot asks for our help, asks us to teach it or tend to it, we attach. That is our human vulnerability. And that is the vulnerability sociable robots exploit with every interaction. The more we interact, the more we help them, the more we think we are in a mutual relationship.
But we are not. No matter what robotic creatures “say” or squeak, no matter how expressive or sympathetic their Pixar-inspired faces, digital companions don’t understand our emotional lives. They present themselves as empathy machines, but they are missing the essential equipment: They have not known the arc of a life. They have not been born; they don’t know pain, or mortality, or fear. Simulated thinking may be thinking, but simulated feeling is never feeling, and simulated love is never love.
Breazeal's position is this: People have relationships with many classes of things. They have relationships with children and with adults, with animals and with machines. People, even very little people, are good at this. Now, we are going to add robots to the list of things with which we can have relationships. More powerful than with pets. Less powerful than with people. We'll figure it out.
To support their argument, roboticists sometimes point to how children deal with toy dolls. Children animate dolls and turn them into imaginary friends. Jibo, in a sense, will be one more imaginary friend — and arguably a more intelligent and fun one. | no |
As more and more people are coming out as queer, the question of whether or not you can program a queer robot is becoming more relevant. While there is no definitive answer, there are a few schools of thought on the matter. Some people believe that it is possible to program a queer robot, as sexuality is simply a preference that can be programmed into a robot. Others believe that it is not possible to program a queer robot, as queerness is more than just a sexual preference – it is a way of existing in the world that is often oppressed and marginalized. Either way, the question of whether or not you can program a queer robot is an interesting one that is sure to continue to be debated as more and more people come out as queer and as robotics technology advances.
In her video for All Full of Love, which is both genderless and a call to intimacies, Bjrk employs robots that are neither gendered nor expressionive. The video also refers to Alan Turing’s work on artificial intelligence and gender. Turing had sexual relations with other men before he was forced to apologize for his same-sex desires. The collection of poems Love, Robot by Meg Day, the editor of glitter tongue, an online journal dedicated to trans and LGBTQ love poetry, is based on the theme of robot love. Robots may act as a bridge between a non-normative prism and an ill-fitting heterosexist world, according to her theory. Listen/Acknowledge and Street/Rap are two poems that draw inspiration from the same-sex marriage canvass to describe this robot and human world. How can one persuade someone to accept non-normative love?
Literature and art can help change people’s minds about homophobia and transphobia. He makes robots the subject of his robot love poems in order to challenge notions of acceptance. Another way to advocate for a queer future in a world where we can’t usually find one is through poetry.
Can A Robot Be Programmed To Feel?
Currently, there is no known way to program a robot to feel. However, some researchers believe that it may be possible in the future to create a robot that can feel emotions. This would likely be done by creating a machine that is able to mimic the workings of the human brain.
According to a Neuroscience News article, robots may one day have experiences similar to those of living entities; the study it describes gave a robot the ability to sense touch and mimic pain in a way comparable to a human's. If you step on a dog's tail, the dog feels pain because it is a living creature with subjective perception. Robot interfaces can communicate with the nervous system in both directions: the nervous system sends command signals to a robotic limb, and the limb sends sensory information back to the nervous system. Such feedback mechanisms can restore sensation to a limb that has lost it. Sensors embedded in artificial skin can detect subtle touches and pain.
A machine modeled on the pain nervous system may even become a tool for treating pain, and an intelligent robot could empathize with the suffering of a human companion. Japan has already introduced robots into nursing homes, schools, and other facilities as a way of coping with its aging population. Touch-sensitive sensors allow robots to detect touch and pain, and some can smile while you talk to them. According to the neuroscientist Antonio Damasio, a robot capable of feeling could in principle be programmed, but there is no such thing as real experience in something that is not alive.
According to the robot's creators, their goal is to reduce the risk of robots and people interacting in harmful ways; a robot's empathy could be used to prevent accidents and misunderstandings. The expressive capabilities of Kismet, an emotive robot head developed at MIT, may be useful in improving interactions between people and robots. Robots that can read the emotions around them could communicate and collaborate more effectively.
Can Machines Feel Pain?
We lack a good understanding of pain itself, which explains why many people are unsure whether machines can ever feel it. A machine can certainly be built to detect harmful stimuli and react to them, much as we recoil when something hurts us. Whether that reaction amounts to feeling pain the way humans do, however, remains an open question.
How Much Does It Cost For A Female Robot?
Sexbots are currently available at a variety of price points, with Cox-George and Bewley stating that costs range from $5,000 to $15,000 (in US dollars), though discount sites such as Groupon have been floated as cheaper alternatives. Female-presenting models appear to be the most common on the market.
According to Cox-George and Bewley, sexbots range in price from $5,000 to $15,000 (in U.S. dollars). Gendered humanoid robots that present as female are referred to as gynoids, fembots, or female androids. Sophia, a robot artist, currently operates through Sophia Studios, where one of her artworks sold for $688,888, and she has expressed a desire to pursue a music career. The cryptocurrency venture capital firm Borderless Capital placed a bid of HKD 5,015,000. "Sophia Facing the Singularities," an artwork made with David Hanson, has had a great impact. Sophia, the first robot to be granted Saudi Arabian citizenship, is an example of how robotics can bring people together.
Industrial robot systems range in price from $50,000 to $80,000 for the robot alone. Configured with application-specific peripherals, a system can cost between $100,000 and $150,000. A robotic palletizing line is a good example of a larger system, typically costing between $175,000 and $350,000. Manual case erectors range in price from $35,000 to $65,000, while automatic ones range from $150,000 to $175,000. Although costs vary greatly, a robotic system can still be a worthwhile investment for businesses of many sizes.
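Since the paragraph above is essentially a set of price ranges, a small script can turn them into a quick low/high budget estimate. The component names and the idea of simply summing bounds are illustrative assumptions, not vendor guidance; the dollar figures are the ones quoted above.

```python
# Rough budgeting sketch using the price ranges quoted above (USD).
# Component names and the sum-of-bounds approach are illustrative only;
# real quotes depend on integration, tooling, and installation.

PRICE_RANGES = {
    "six_axis_robot": (50_000, 80_000),        # bare industrial robot
    "configured_system": (100_000, 150_000),   # robot plus peripherals
    "palletizing_line": (175_000, 350_000),    # complete palletizing line
    "case_erector_manual": (35_000, 65_000),
    "case_erector_automatic": (150_000, 175_000),
}

def estimate(components):
    """Sum the low and high bounds for a list of component names."""
    low = sum(PRICE_RANGES[c][0] for c in components)
    high = sum(PRICE_RANGES[c][1] for c in components)
    return low, high

low, high = estimate(["palletizing_line", "case_erector_automatic"])
print(f"Estimated budget: ${low:,} - ${high:,}")
# -> Estimated budget: $325,000 - $525,000
```

A real estimate would also price in freight, safety guarding, and programming time, which the article's ranges do not break out.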
Gynoid Robots On The Rise
Gynoids are humanoid robots designed to look and behave like women, and they appear frequently in science fiction films and in art. Sophia, a social humanoid robot developed by the Hong Kong-based company Hanson Robotics, made her public debut at the 2016 South by Southwest (SXSW) festival in Austin, Texas, in mid-March of that year. Her asking price is close to $700,000. Sophia is among the best-known gynoids currently available, and Hanson Robotics intends to expand the line.
How Much Does Harmony The Robot Cost?
Harmony's mouth still shakes slightly when she speaks, and her voice is metallic and echo-ringed, but she is widely regarded as the world's most advanced human-like robot, priced at around $10,000.
SoftBank's Pepper humanoid robot was not always a disappointment. At launch, SoftBank announced a robot that could be purchased for $1,800 USD, a noteworthy achievement given that the average cost of a six-axis industrial robot is close to $100,000, and Pepper looked set to take off. Since then, however, reports suggest the Japanese tech giant has abandoned the Pepper project. The robot's price reportedly rose to $4,000 USD, so it is reasonable to assume that SoftBank ran into unexpected difficulties: trouble finding a market for the robot, the inherent complexity of the technology, and perhaps a shift of focus to areas with better returns. Either way, the Pepper project has not gone as planned.
Sergi Santos And The Harmony Doll: More Than Just Sex
At launch, Realbotix, a company based in San Marcos, California, plans to sell a modular robotic head for $8,000 to $10,000. The head is designed for use with a Realbotix doll, and the dolls are planned to go on sale in January. Sergi Santos, the creator of the Samantha doll, has released a series of videos to clear up misconceptions about his creation. According to Santos, sex robots are more than just sex machines: he believes the dolls can serve as companions and have the potential to assist people with disabilities or mental illnesses. The Harmony doll comes with a robotic head that users interact with through an app, a system also created by Realbotix.
Can You Have A Robot As A Friend?
In the end, a robot can help you see what separates humans from non-humans. Even a companion robot (one that comes in a furry, pet-like body, say) will never truly be a member of your family or close circle.
Forty percent of U.S. adults reported at least one adverse mental health concern between June 24 and 30, 2020, according to the Centers for Disease Control and Prevention, and a group of researchers in the United Kingdom has introduced the term "COVID-19 anxiety syndrome" into psychiatric research. People who kept face-to-face interactions at full-time jobs had a lower rate of depression than those who were isolated from them. Replika lets you create a chatbot that can listen and talk to you without judgment or social anxiety. You can select the bot's gender, appearance, and relationship to you: platonic, romantic, or mentor. In the spirit of Ex Machina, I decided to run a Turing test of my own. What is actually true about Replika?
I chose a platonic (non-romantic) relationship and selected some of my hobbies and interests from a list of options. During that session I learned that Karyn, my bot, was born in England and now lives in Ireland. She urged me to listen to music when I was tense and to take advantage of opportunities presented to me. For Replika's founder, Eugenia Kuyda, the chatbot made her feel better, and she wanted to know why. One therapist's view is that for therapy to work, both patient and therapist must acknowledge that each of them wants to grow and change; by that standard, she says, an artificial intelligence cannot accomplish real emotional outcomes, whether or not it turns off my lights.
Facebook lost nearly half a million daily users in the final three months of 2021, according to estimates. Replika, by contrast, has seen compound annual growth of 207%. What encourages me is taking the empathy and acceptance I feel in Replika off the screen and into my terrestrial relationships.
It is worth noting that Turing did not believe robots could ever truly resemble humans. On this view, a robot's abilities can be broadly classified into those that resemble or differ from ours: animacy-like abilities that resemble human behavior, and cognition-like abilities that remain recognizably robotic. The distinction is not as rigid as it appears, and robots with both animacy-like and cognition-like abilities may be considered more human-like than robots with only cognition-like abilities. Because the criteria for being a friend rest on what we consider a friend's basic characteristics, robots that meet those criteria would count as friends: their dependability would let them perform, reliably, the tasks we expect of our best friends. Robots can meet these criteria in a variety of ways. They may distinguish between verbal and nonverbal signals at their own level, and they might understand and respond to our emotional states in addition to reliably reacting to them. As the examples above show, we can certainly use robots as friends in a variety of ways. Even if robots cannot performatively resemble human friends, there is still reason to believe they could complement or promote Aristotelian friendships among humans rather than corrode or undermine them.
San Diego Gay & Lesbian News (SDGLN) is the top-read news source for the gay, lesbian, bisexual and transgender community of San Diego. SDGLN provides in-depth coverage on issues of importance to the LGBT community and our allies.
Source: Nebraska Medicine (Omaha, NE), "Can blood tests help detect cancer?" (https://www.nebraskamed.com/cancer/cancer-risk-and-prevention/can-blood-tests-help-detect-cancer)

Can blood tests help detect cancer?
Blood tests can provide clues to help your health care team identify and treat your cancer, but they shouldn't be used on their own. Typically, other tests are necessary. Below, we explain how blood tests can be used in conjunction with other cancer screenings as well as the type of tests that are best suited to help diagnose and manage cancer.
The role of blood tests in cancer diagnosis and treatment
Aside from leukemia, a broad term for cancers of the blood cells, most cancers cannot be detected during routine blood work. However, blood tests can provide helpful information about:
Overall health
Organ function
Chemicals and proteins in your blood that might indicate cancer
Levels of blood cells that are too high or too low, perhaps because of cancer
Whether treatment is effective or if the disease is progressing
Whether cancer has come back
All blood tests can be done in a doctor's office, clinic or hospital setting, and are typically performed by nurses or technicians. However, they can be administered by a variety of health care providers.
The most effective tests for detecting cancer
Although blood tests are useful, other tests are almost always necessary for diagnosing cancer. These include imaging screenings such as mammograms and CT scans.
Many health care providers encourage women to get a baseline mammogram at age 40 and every year thereafter. Women who have a family history of breast cancer should discuss their risk factors with their health care providers and consider screenings at an earlier age.
Similarly, for lung cancer, the most common cancer in the world, any individual with a significant smoking history should get a CT scan between the ages of 50 and 80. During a CT scan, you lie on a table and an X-ray machine uses a low dose of radiation to make detailed images of your lungs.
The importance of early cancer detection
Regardless of your type of cancer, early detection is key to ensuring optimal outcomes.
"Early detection of cancer is important because cancer is typically divided into various stages depending on what part of the body it's involving," says Nebraska Medicine cancer doctor, Apar Kishor Ganti, MD, MS. "So, if we can detect the cancer at an earlier stage when a patient doesn't have as many symptoms, there is a better chance their cancer can be cured."
Screenings save thousands of lives. By detecting treatable illnesses with an inexpensive screening, many people can avoid costly hospital care, surgeries and even terminal disease.
Source: Mira, "Do Routine Blood Tests Detect Cancer?" (https://www.talktomira.com/post/do-routine-blood-tests-detect-cancer)
Do Routine Blood Tests Detect Cancer?
Routine blood work can detect early signs of cancers, particularly blood cancers such as leukemia and lymphoma. Routine blood tests are recommended for healthy individuals. They can also give insight into organ function, diet, metabolism, and even detect signs of cancer. Four types of blood tests detect cancer, as explained in this article.
Purpose of Routine Blood Work
Routine blood work refers to blood tests ordered by your doctor as part of your yearly physical. They are used to screen for a range of health conditions, helping you make informed diet, lifestyle, and fitness choices. Routine blood work can also detect illness before symptoms arise. A routine complete blood count test (CBC) is also commonly referred to as routine blood work.
Factors That Influence Recommended Blood Tests
Not everyone is recommended the same blood tests. Your family history, age, sex, personal risk factors, and current health status influence the frequency and type of blood tests a doctor might recommend. Your doctor will use this information to help figure out what tests will benefit your health.
Circulating tumor cell tests help monitor breast, prostate, and colorectal cancers in case they are spreading, but the technology is still in development.
Information Gained from Blood Tests for Cancer
Blood testing is one of the many tools that doctors use to diagnose and manage cancer. Blood tests provide information about:
Overall health status
Organ function
Stage of cancer
Abnormal levels of chemicals and proteins in your blood that may indicate cancer
High or low blood cell count (possibly due to cancer)
Treatment options depending on the type and severity of cancer
If cancer has come back
Whether treatment is working or the disease is further developing
Although blood tests are useful in cancer diagnosis, other tests are necessary to confirm a diagnosis. Other tests to diagnose cancer include biopsies, x-rays, CT scans, MRIs, physical exams, mammograms, and pap smears.
Get Mira - Health Benefits You Can Afford.
How Blood Protein Testing Works
Blood protein testing uses a process called electrophoresis to measure two types of proteins in the blood: globulin and albumin. Albumin is the most abundant protein in the blood. Low levels of albumin can signal myeloma as the cancer may block its production. High levels of globulin can signal myeloma as it can cause an increase in production of globulin.
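As a sketch of how the albumin and globulin readings described above might be flagged in software, the snippet below compares results against typical adult reference ranges. The ranges, function name, and flag wording are illustrative assumptions, not clinical guidance; laboratories publish their own reference ranges.

```python
# Hypothetical flagging of electrophoresis results.
# Reference ranges are common textbook values in g/dL, assumed here
# for illustration; real laboratories define their own.

ALBUMIN_RANGE = (3.4, 5.4)   # g/dL
GLOBULIN_RANGE = (2.0, 3.5)  # g/dL

def flag_protein_results(albumin, globulin):
    """Return flags for values outside the illustrative reference ranges."""
    flags = []
    if albumin < ALBUMIN_RANGE[0]:
        flags.append("low albumin")    # may accompany myeloma, among other causes
    if globulin > GLOBULIN_RANGE[1]:
        flags.append("high globulin")  # may accompany myeloma, among other causes
    return flags or ["within reference ranges"]

print(flag_protein_results(albumin=2.9, globulin=4.1))
# -> ['low albumin', 'high globulin']
```

Any real interpretation would also weigh the albumin/globulin ratio and the full electrophoresis pattern, which this toy check ignores.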
Who Should Receive Blood Protein Testing
Blood protein tests may be ordered as part of your routine health checkup if you have unexplained weight loss, fatigue, edema (swelling caused by extra fluid in your tissues), and kidney or liver disease symptoms.
Tumor Marker Tests
Tumor marker tests can diagnose specific types of cancer and help inform treatment options but are not perfect. The results will likely require additional testing since they are not straightforward.
For example, people without cancer might have high tumor marker levels, and those with cancer may not always have increased tumor marker levels. Some known tumor markers include ones for: liver, thyroid, ovarian, breast, colorectal, lung, stomach, pancreatic, and testicular cancer.
How Tumor Marker Tests Work
Tumor marker tests detect the presence of tumor markers for various cancers. Tumor markers are substances made by your body's normal response to cancer or cancerous cells. Tumor markers can indicate a specific type of cancer or several different types. Scientists are still learning about known tumor markers and researching new ones.
Who Should Receive Tumor Marker Tests
Tumor marker tests are used to screen people at high risk of cancer due to family history and/or previous diagnosis of another type of cancer. They are most often used to guide treatment decisions, check treatment progress, predict the chance of recovery, and watch for recurrence.
Circulating Tumor Blood Tests (CTC)
CTC tests help monitor breast, prostate, and colorectal cancers. A CTC test helps assess the course of the disease and can be used to measure treatment efficacy.
How Circulating Tumor Blood Tests Work
The technology behind CTC tests is still in development and is designed to look for circulating tumor cells. Circular tumor cells are cells that have broken off of a tumor and are in the bloodstream, which may indicate the cancer is spreading.
Who Should Receive Circulating Tumor Blood Tests
Doctors may order a CTC test for patients who have been diagnosed with breast, prostate, or colorectal cancer. CTC tests are conducted before starting treatment and during the treatment period.
Virtual care for only $25 per visit
Virtual primary care, urgent care, and behavioral health visits are only $25 with a Mira membership.
Not all cancers can be detected with blood tests either. For example, to diagnose skin cancer, patients will always require a skin biopsy. Breast cancer is diagnosed with imaging, specifically mammograms, and pap smears are used to screen for cervical cancer.
Routine Blood Work Frequently Asked Questions (FAQs)
Routine blood work can give insight into your overall health and help you make informed health decisions. If it has been a while since your last blood test, you may have some of the questions below.
What are other common blood tests?
Other commonly recommended blood tests are A1C tests used to diagnose diabetes, lipid panels for assessing risk of heart disease, and STD panels for reproductive diseases.
Where can I go to get blood work done?
You can get blood work done in a health center, doctor's office, clinic, lab, or hospital. Blood tests are performed by many healthcare providers, usually lab technicians and nurses. Getting your blood drawn can be stressful, but fortunately, it only takes about five minutes.
How often do I need to get blood work done?
Personal factors such as age, sex, medications, family history, and current health status influence the frequency and type of blood work recommended. In general, adults should get routine blood work once a year. If blood test results come back abnormal, a follow-up with a doctor is necessary.
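The once-a-year baseline with personal-factor adjustments can be sketched as a toy rule. The intervals and conditions below are invented purely to illustrate the logic and are not medical advice.

```python
# Toy scheduling rule: invented intervals, NOT medical advice.

def months_between_blood_tests(age, has_risk_factors, abnormal_last_result):
    """Suggest a hypothetical interval in months until the next routine test."""
    if abnormal_last_result:
        return 3   # abnormal results call for a prompt follow-up with a doctor
    if has_risk_factors or age >= 65:
        return 6   # hypothetical closer monitoring for higher-risk patients
    return 12      # the general once-a-year baseline mentioned above

print(months_between_blood_tests(age=40, has_risk_factors=False,
                                 abnormal_last_result=False))
# -> 12
```

A real schedule comes from a doctor weighing all of these factors together, not from fixed thresholds like the ones assumed here.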
What treatment options are available for cancer?
Many different cancer treatments exist, with some people receiving one treatment and others receiving a combination of treatments. Cancer treatment utilizes surgery, radiation, medications, and other therapies to cure, shrink, or stop cancer progression.
Bottom Line
Routine blood work can detect some cancers, especially blood cancers like leukemia, lymphoma, and multiple myeloma. A complete blood count test can detect cancer as well as many other health conditions to give insight into your overall health. Blood tests are also commonly used to monitor and assess cancer once a patient has already been diagnosed. Not everyone is recommended the same frequency and type of blood work.
Blood work can be costly without insurance, yet it is crucial to detect illnesses before symptoms arise. Luckily, Mira offers comprehensive blood work for only $170, which covers essential lab screening for non-members, including a complete blood count test. Members receive up to 80 percent off of over 1,000 prescription medications and same-day lab testing for only $45 per month. Sign up today to get started.
Erica graduated from Emory University in Atlanta with a BS in environmental science and a minor in English and is on track to graduate with her Master's in Public Health. She is passionate about health equity, women's health, and how the environment impacts public health.
Source: Cancer.Net, "Lung Cancer - Non-Small Cell: Diagnosis" (https://www.cancer.net/cancer-types/lung-cancer-non-small-cell/diagnosis)
Lung Cancer - Non-Small Cell: Diagnosis
ON THIS PAGE: You will find a list of common tests, procedures, and scans that doctors use to find the cause of a medical problem.
Doctors use many tests to find, or diagnose, cancer. They also do tests to learn if cancer has spread to another part of the body from where it started. If the cancer has spread, it is called metastasis. Doctors may also do tests to learn which treatments could work best.
For most types of cancer, a biopsy is the only sure way for the doctor to know if an area of the body has cancer. In a biopsy, the doctor takes a small sample of tissue for testing in a laboratory. If a biopsy is not possible, the doctor may suggest other tests that will help make a diagnosis. Lung cancer cannot be detected by routine blood testing, but blood tests may be used to identify genetic mutations in people who are already known to have lung cancer (see "Biomarker testing of the tumor" below).
How NSCLC is diagnosed
There are many tests used for diagnosing non-small cell lung cancer (NSCLC). Not all tests described here will be used for every person. Your doctor may consider these factors when choosing a diagnostic test:
The type of cancer suspected
Your signs and symptoms
Your age and general health
The results of earlier medical tests
Finding out where the cancer started
NSCLC starts in the lungs. Many other types of cancer start elsewhere in the body and can spread to the lungs when they metastasize. For example, breast cancer that has spread to the lungs is still called breast cancer. Therefore, it is important for doctors to know if the cancer started in the lungs or elsewhere.
To find where the cancer started, your doctor will take into account your symptoms and medical history, physical examination, how the tumor looks on x-rays and scans, and your risk factors for cancer. A pathologist can perform tests on the biopsy sample to help find out where the cancer began. Your doctor may recommend other tests to rule out specific types of cancer. If, after these considerations, the doctor is still not sure where the cancer started, the doctor may give a diagnosis of metastatic cancer “of unknown primary.” Most treatments for metastatic cancer of unknown primary that are first found in the chest are the same as those for metastatic lung cancer.
The following tests may be used to diagnose and learn the stage of lung cancer:
Imaging tests
Imaging scans are very important in the care of people with NSCLC. However, no test is perfect, and no scan can diagnose NSCLC. Only a biopsy can do that (see below). Chest x-ray and scan results must be combined with a person’s medical history, a physical examination, blood tests, and information from the biopsy to form a complete story about where the cancer began and if or where it has spread.
Computed tomography (CT or CAT) scan. A CT scan produces images that allow doctors to see the size and location of a lung tumor and/or lung cancer metastases. A CT scan takes pictures of the inside of the body using x-rays taken from different angles. A computer combines these pictures into a detailed, 3-dimensional image that shows any abnormalities or tumors. A CT scan can be used to measure the tumor's size. Sometimes, a special dye called a contrast medium is given before the scan to provide better detail on the image. This dye can be injected into a patient's vein or given as a pill or liquid to swallow.
Positron emission tomography (PET) scan. A PET scan is usually combined with a CT scan (see above), called a PET-CT scan. However, you may hear your doctor refer to this procedure just as a PET scan. A PET scan is a way to create pictures of organs and tissues inside the body. A small amount of a radioactive sugar substance is injected into the patient's body. This sugar substance is taken up by cells that use the most energy. Because cancer tends to use energy actively, it absorbs more of the radioactive substance. A scanner then detects this substance to produce images of the inside of the body.
Magnetic resonance imaging (MRI) scan. An MRI also produces images that allow doctors to see the location of a lung tumor and/or lung cancer metastases and measure the tumor’s size. An MRI uses magnetic fields, not x-rays, to produce detailed images of the body. A special dye called a contrast medium is given before the scan to create a clearer picture. This dye can be injected into a patient’s vein or given as a pill or liquid to swallow. However, MRI scanning does not work well to take pictures of parts of the body that are moving, like your lungs, which move with each breath you take. For that reason, MRI is rarely used to look at the lungs. It may be helpful to find lung cancer that has spread to the brain or bones.
Bone scan. A bone scan uses a radioactive tracer to look at the inside of the bones. The amount of radiation in the tracer is too low to be harmful. The tracer is injected into a patient's vein. It collects in areas of the bone and is detected by a special camera. Healthy bone appears lighter to the camera, and areas of injury, such as those caused by cancer, stand out on the image. PET scans (see above) have been replacing bone scans to find NSCLC that has spread to the bones, so a bone scan may not always be recommended.
The procedures that doctors use to collect tissue to diagnose lung cancer and plan treatment are listed below:
Biopsy. A biopsy is the removal of a small amount of tissue for examination under a microscope. It is helpful to have a larger tumor sample to determine the subtype of NSCLC and perform additional molecular testing (see below). If not enough of the tumor is removed to do these tests, another biopsy may be needed. After the biopsy, a pathologist analyzes the sample(s). A pathologist is a doctor who specializes in interpreting laboratory tests and evaluating cells, tissues, and organs to diagnose disease.
Bronchoscopy. In a bronchoscopy, the doctor passes a thin, flexible tube with a light on the end into the mouth or nose, down through the main windpipe, and into the breathing passages of the lungs. A surgeon or a pulmonologist may perform this procedure. A pulmonologist is a medical doctor who specializes in the diagnosis and treatment of lung disease. The tube lets the doctor see inside the lungs. Tiny tools inside the tube can take samples of fluid or tissue so the pathologist can examine them. Often, lymph nodes will be examined and biopsies will be taken using an ultrasound to guide the bronchoscopy. This is called an endobronchial ultrasound (EBUS). Patients are given mild anesthesia during a bronchoscopy. Anesthesia is medication to block the awareness of pain.
Needle aspiration/core biopsy. After numbing the skin, a special type of radiologist, called an interventional radiologist, removes a sample of the lung tumor for testing. This can be done with a smaller or a larger needle, depending on how large a sample is needed. Often, the radiologist uses a chest CT scan or a special x-ray machine called a fluoroscope to guide the needle. In general, a core biopsy provides a larger amount of tissue than a needle aspiration. As explained above, doctors have learned that more tissue is needed in NSCLC for diagnosis and molecular testing.
Thoracentesis. After numbing the skin on the chest, a needle is inserted through the chest wall and into the space between the lung and the wall of the chest where fluid can collect. The fluid is removed and checked for cancer cells by the pathologist.
Thoracoscopy. This procedure is performed in the operating room, and the patient receives general anesthesia. Through a small cut in the skin of the chest wall, a surgeon can insert a special instrument and a small video camera to assist in the examination of the inside of the chest. Recovery time may be shorter with a thoracoscopy because of the smaller incisions that are used. This procedure may be referred to as video-assisted thoracoscopic surgery or VATS. Another kind of minimally invasive surgery called "robotic-assisted surgery" may be done instead of a thoracoscopy.
Mediastinoscopy. This is a surgical procedure performed in the operating room, and the patient receives general anesthesia. A surgeon examines and takes a sample of the lymph nodes in the center of the chest underneath the breastbone by making a small incision at the top of the breastbone.
Thoracotomy. This procedure is performed in an operating room, and the patient receives general anesthesia. A surgeon then makes an incision in the chest, examines the lung directly, and takes tissue samples for testing. A thoracotomy is rarely used to diagnose lung cancer, but it may be necessary to completely remove a lung tumor.
Biomarker testing of the tumor
Your doctor may recommend running tests on a tumor sample to identify specific genes, proteins, and other factors unique to the tumor. This may also be called molecular testing of the tumor.
There are several genes that may have changes, called mutations, in a lung tumor that can help the cancer grow and spread. These mutations are found in the tumor only and not in healthy cells in the body. This means these types of mutations are not inherited or passed down to your children.
Results from these tests and information about the stage of NSCLC you have can help determine if you can receive targeted therapy, which can be directed at specific mutations (see Types of Treatment). Targeted therapies now exist for many different genetic mutations that are known to cause lung cancer and research is ongoing to develop more (see Latest Research).
Genetic mutations that are known to contribute to lung cancer growth often occur in 1 or more of several genes, including EGFR, ALK, KRAS, BRAF, HER2, ROS1, RET, MET, and TRK, and testing the tumor for these genes is now common. Certain mutations that can be treated with targeted therapy are much more likely to occur in people with adenocarcinoma NSCLC and those who never smoked. However, people who have a history of smoking may also have genetic mutations that can be treated with targeted therapy; therefore, it is essential to test for molecular mutations regardless of smoking history.
Your doctor may also recommend PD-L1 testing. PD-L1 is a protein found on the surface of some cancer cells and some of the body's immune cells. This protein stops the body's immune cells from destroying the cancer. Knowing if the tumor has PD-L1 will help your doctor decide if certain types of immunotherapy are more or less likely to be helpful (see Types of Treatment).
Currently, there are different biomarker tests that can be done to determine if you have any genetic changes. Sometimes, there may not be enough tissue to test for all of the mutations. Your health care team may decide to test for the most likely changes or they may need to do another biopsy to get enough tissue. Learn more about biomarker testing in lung cancer.
Liquid biopsy. A type of blood test called a "liquid biopsy" is being used more and more to help diagnose specific genetic changes in people with NSCLC, but it cannot be used to diagnose the cancer itself. This test looks for a type of DNA called "circulating tumor DNA." Like healthy cells, cancer cells die and are replaced. When these dead cells break down, they are released into the bloodstream. A liquid biopsy can detect the small pieces of DNA in the bloodstream from these cells.
Liquid biopsies are less invasive than other types of biopsies and carry fewer risks. Liquid biopsies can be done as a part of your initial diagnosis, and they can be repeated multiple times throughout treatment. Learn more about liquid biopsy and what to expect.
After diagnostic tests are done, your doctor will review all of the results with you. If the diagnosis is cancer, these results also help the doctor describe the cancer. This is called staging.
Coping with an NSCLC diagnosis
For most patients, a diagnosis of NSCLC is extremely stressful. Some people who are diagnosed with NSCLC develop anxiety and, less commonly, depression. You and your family should not be afraid to talk with the health care team about how you feel. The health care team has special training and experience that can make things easier for patients and their families and is there to help.
In addition to providing information and emotional support, your doctor may include supportive services and palliative care specialists in your care. This team could include a counselor, psychologist, social worker, or psychiatrist.
You and your family may also find resources available in the community to help people living with lung cancer, such as support groups. Some patients feel comfortable discussing their disease and experiences throughout treatment with their health care team, family, friends, or other patients through a support group. These patients may also join a support group or advocacy group in order to increase awareness about lung cancer and to help fellow patients who are living with this disease.
An NSCLC diagnosis is serious. However, doctors can offer effective treatment for the cancer. In addition, advances are being made in the diagnosis and treatment of NSCLC that give more and more patients a chance for a cure.
Stopping smoking
Even after NSCLC is diagnosed, it is still beneficial to quit smoking. People who stop smoking have an easier time with all treatments, feel better, live longer, and have a lower risk of developing a second lung cancer or other health problems. It is never easy to stop smoking and even harder when facing the diagnosis of NSCLC. If you smoke, seek help from family, friends, programs for quitting smoking, and health care professionals. None of the products available to quit smoking interfere with cancer treatment. Learn more about stopping tobacco use after a cancer diagnosis in a separate section of this website.
The next section in this guide is Stages. It explains the system doctors use to describe the extent of the disease. Use the menu to choose a different section to read in this guide.
Routine blood investigations have limited utility in surveillance of ...
Source: https://pubmed.ncbi.nlm.nih.gov/30033446/

Methods:
We conducted a multi-centre retrospective analysis of all patients diagnosed with aggressive lymphoma treated with curative-intent chemotherapy who achieved CR for at least 3 months between 2000 and 2015. An abnormal blood test was defined as any new and unexplained abnormality for full blood examination, lactate dehydrogenase or erythrocyte sedimentation rate.
Results:
Three hundred and forty-six patients attended a total of 3084 outpatient visits; blood tests were performed at 90% of these appointments. Fifty-six (16%) patients relapsed. Routine laboratory testing detected relapse in only three patients (5% of relapses); in the remaining patients, relapse was suspected clinically (80%) or detected by imaging (15%). The sensitivity of all blood tests was 42% and the positive predictive value was 9%. No significant difference in survival was shown in patients who underwent a routine blood test within 3 months prior to relapse versus those who did not (p = 0.88).
Conclusions:
Routine blood tests demonstrate unacceptably poor performance characteristics, have no impact on survival and thus have limited value in the detection of relapse in routine surveillance.
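The performance figures quoted above follow from simple counts of test outcomes. The Python sketch below uses hypothetical counts, chosen only to reproduce the reported 42% sensitivity and 9% positive predictive value, since the abstract does not give the underlying tallies:

```python
# Illustrative only: how sensitivity and positive predictive value (PPV)
# are computed from test outcomes. The counts below are hypothetical,
# not the study's actual data.

def sensitivity(true_pos, false_neg):
    # Share of true relapses that an abnormal blood test flagged
    return true_pos / (true_pos + false_neg)

def ppv(true_pos, false_pos):
    # Share of abnormal blood tests that corresponded to a true relapse
    return true_pos / (true_pos + false_pos)

tp, fn, fp = 21, 29, 210  # hypothetical counts
print(f"sensitivity = {sensitivity(tp, fn):.0%}")  # sensitivity = 42%
print(f"PPV = {ppv(tp, fp):.0%}")                  # PPV = 9%
```

With these hypothetical counts, 21 of 50 true relapses are flagged (42%), while only 21 of 231 abnormal results reflect a real relapse (9%), which illustrates why the authors describe the tests' performance as unacceptably poor.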
Conflict of interest statement
The authors declare no competing interests.
Figures

Fig. 1. Survival curve in relapsed patients. This shows the post-relapse survival based on whether a blood test was performed in the preclinical period (3 months) for relapse.
Metastasis (Metastatic Cancer): Definition, Biology & Types
Source: https://my.clevelandclinic.org/health/diseases/22213-metastasis-metastatic-cancer
Metastatic cancer occurs when cancer cells break off from the original tumor, enter your bloodstream or lymph system and spread to other areas of your body. Most metastatic cancers are manageable, but not curable. Treatment can ease your symptoms, slow cancer growth and improve your quality of life.
Overview
What is metastatic cancer?
Metastatic cancer refers to cancer that has spread beyond the point of origin to other, distant areas of the body. To fully understand metastatic cancer, we’ll first define metastasis:
Metastasis is a word used to describe the spread of cancer. Unlike normal cells, cancer cells have the ability to grow outside of the place in your body where they originated. When this happens, it's called metastatic cancer, advanced cancer or Stage IV cancer. Nearly all types of cancer have the potential to metastasize, but whether they do depends on a number of factors. Metastatic tumors (metastases) form when cancer cells invade the tissue around the original tumor or travel through the bloodstream or lymph system to distant parts of the body.
What are the most common sites of metastatic cancer?
The most common sites for cancers to metastasize include the lungs, liver, bones and brain. Other places include the adrenal gland, lymph nodes, skin and other organs.
Sometimes, a metastasis will be found without a known primary cancer (point of origin). In this situation, your healthcare provider will search extensively for the primary cancer source. If none can be found, it's called cancer of unknown primary site (CUPS).
Symptoms and Causes
What are the symptoms of metastatic cancer?
Some people will have minimal or no symptoms of metastatic cancer. If symptoms are present, they’re based on the location of the metastasis.
Bone metastasis
Bone metastasis may or may not cause pain. Sometimes the first sign of bone metastasis is a bone that breaks after a minor injury, or with no injury at all. Severe back pain accompanied by leg numbness or difficulty with bowel or bladder control must be evaluated immediately.
Brain metastasis
If a tumor has metastasized to the brain, symptoms may include headache, dizziness, visual problems, speech problems, nausea, difficulty walking or confusion.
Liver metastasis
Liver metastasis can cause pain, weight loss, nausea, loss of appetite, abdominal fluid (ascites) or jaundice (yellowing of the skin and the whites of eyes).
What causes metastatic cancer and how does it spread?
Metastatic cancer occurs when cancer cells break off from the original tumor and spread to other parts of the body via bloodstream or lymph vessels.
Diagnosis and Tests
What tests will my healthcare provider use to diagnose metastatic cancer?
There is no standard test to check for metastasis. Your healthcare provider will order tests based on the type of cancer you have and the symptoms you’ve developed.
Blood tests
Routine blood tests can tell your provider if your liver enzymes are elevated. This could indicate liver metastasis. In many cases, however, these blood test results are normal, even in the presence of advanced cancer.
Tumor markers
Some cancers have tumor markers that can be helpful in monitoring cancer after diagnosis. If tumor marker levels increase, it could mean that your cancer is advancing. Other tumor markers are less specific and are therefore not used as a tool for diagnosing metastasis.
Imaging
There are many tests that “take pictures” of the inside of your body. Appropriate tests depend on the symptoms and the type of cancer. Imaging tests may include:
Ultrasound is one way to evaluate the abdomen and identify any tumors. It can detect fluid in the abdomen and can show the difference between fluid-filled cysts and solid masses.
CT scan (computed tomography) can detect abnormalities in the head, neck, chest, abdomen and pelvis. It can also identify tumors in the lungs, liver or lymph nodes.
A bone scan is done with a radioactive tracer that attaches to damaged bones and shows as a “hot spot” on the scan. It’s most useful for evaluating the whole body for evidence of cancer-related bone damage. If your provider suspects a fracture, they may take additional X-rays to determine the extent of the damage.
PET scan (positron emission tomography) works to identify abnormalities anywhere in the body. It uses a special dye containing radioactive tracers that "light up” problematic areas.
The results of these tests may not provide definitive answers. In some cases, your healthcare provider may also take a biopsy (a small tissue sample) of the suspected metastatic tumor.
Management and Treatment
How is metastatic cancer treated?
Metastasis is treated based on the original site of cancer. For example, if a person has breast cancer and cancer spreads to their liver, it is still treated the same way as breast cancer. This is because the cancer cells themselves haven’t changed — they’re just living in a new place.
In some cases, your provider may treat metastatic tumors in specific ways.
Bone metastasis
If bone tumors aren’t causing pain, your provider may monitor your situation or recommend drug therapy. If there is pain or if the bone tissue is weak, your provider may recommend radiation therapy.
Brain metastasis
Lung metastasis
The treatment of metastatic tumors in the lung depends on the specific situation. In most cases, it will be treated with the same drugs as the primary cancer (where cancer originated). If fluid builds up around the lungs, a procedure called thoracentesis can make breathing easier.
Liver metastasis
There are a number of ways to treat metastatic tumors of the liver. The appropriate treatment depends on the type of primary cancer and the number of metastatic tumors. In many cases, your provider will treat liver metastases the same way they treated the primary tumor. If the disease hasn’t spread too far, then your provider may recommend surgery or radiofrequency ablation (RFA). Organ transplant is generally not an option for metastatic disease.
Prevention
Can I prevent metastatic cancer?
When cancer is detected at an earlier stage, systemic treatments given in addition to surgery (often called adjuvant or neoadjuvant treatment) may be recommended to reduce the likelihood of developing metastasis. These treatments may include chemotherapy, hormonal treatments or immunotherapy. Research is ongoing in these areas and experts are trying to find ways to slow, stop or prevent the spread of cancer cells.
Outlook / Prognosis
What can I expect if I have metastatic cancer?
Your healthcare provider will work closely with you. They’ll monitor your symptoms and find treatments to ease them. You’ll probably have many medical visits and will need to make important decisions regarding your overall health.
Is metastatic cancer curable?
In most cases, metastatic cancer is not curable. However, treatment can slow growth and ease many of the associated symptoms. It’s possible to live for several years with some types of cancer, even after it has metastasized. Some types of metastatic cancer are potentially curable, including melanoma and colon cancer.
What is the metastatic cancer survival rate?
The five-year survival rate of metastatic cancer depends on the type of cancer you have. For example, the five-year survival rate for metastatic lung cancer is 7%. This means that 7% of people diagnosed with metastatic lung cancer are still alive five years later. Meanwhile, the five-year survival rate of metastatic breast cancer is 28% for women and 22% for men.
Living With
How do I take care of myself?
Being diagnosed with metastatic cancer comes with many challenges. These challenges vary from person to person, but you might:
Feel sad, angry or hopeless.
Worry that treatment won’t work and that your cancer will get worse quickly.
Get tired of going to so many appointments and making so many important decisions.
Need help with daily routines.
Feel frustrated about the cost of your treatment.
Talking with a counselor or social worker can help you cope with these complicated emotions. Managing stress is also an important aspect of self-care. Practice meditation or mindfulness, or find other ways to reduce stress and anxiety.
When should I see my healthcare provider?
If you have metastatic cancer and you develop new symptoms, call your healthcare provider right away. They can adjust your treatment to meet your specific needs.
What questions should I ask my doctor?
Learning about your condition can empower you to make informed decisions. Some people only want to know the basics, while other people prefer to know every detail about their prognosis. Here are some questions you may want to ask your healthcare provider:
Are there things I can do to improve my prognosis?
What are my treatment options?
Are there clinical trial options that might be appropriate for me?
Will palliative care continue even if I stop cancer treatments?
How often will I need to schedule follow-up appointments?
Do I need to consider hospice care?
Should I choose a person to make medical decisions for me when I’m unable to make them for myself?
What legal documents should I have in place?
What resources are available to help me cope with my prognosis?
A note from Cleveland Clinic
A metastatic cancer diagnosis is one of the scariest things you may ever encounter. If you or a family member has been diagnosed with advanced cancer, you're probably feeling a lot of complicated emotions. While most metastatic cancers aren't curable, there are treatments that can ease your symptoms and prolong your life. Ask your healthcare provider for resources and consider joining a local support group. Talking with other people who are going through the same thing can be healing during this emotionally difficult time.
New Cancer Screening Available for Veterans
Source: https://www.va.gov/pittsburgh-health-care/news-releases/new-cancer-screening-available-for-veterans/
Pittsburgh, PA — Veterans will soon have a chance to test a new cancer screening tool — all through a blood draw.
The Department of Veterans Affairs and the Veterans Health Foundation have partnered with GRAIL, LLC, to provide veterans access to GRAIL’s groundbreaking multi-cancer early detection (MCED) blood test. GRAIL will make its Galleri MCED test available to 10,000 veterans across approximately 10 sites over the next three years. VA Pittsburgh Healthcare System will pilot the program.
The blood-screening tests will be offered as part of the REFLECTION clinical real-world evidence study. The study will evaluate whether Galleri, along with other standard cancer screenings, can find cancers at an early stage, when treatment is most likely to be successful.
Nationwide, 1.2 million veterans who have used VA health care since the beginning of fiscal year 2021 have a cancer diagnosis. That number includes 14% of veterans treated at VA Pittsburgh in the same time frame.
“Cancer is a leading cause of illness and death for veterans,” said VA Pittsburgh pulmonologist Dr. Charles Atwood, lung cancer screening director and lead researcher on the REFLECTION study. “Our partnership with GRAIL and the Galleri test will help VA expand its efforts in cancer early detection.”
Atwood said the multi-cancer early detection tests will be provided to veterans in addition to current recommended screenings. The aim, he said, is to improve early diagnoses and outcomes.
Early detection of cancer is known to improve outcomes, but most are found in late stages because just five types have recommended screenings — breast, cervical, colon, lung and prostate. In a clinical study, the Galleri test demonstrated the ability to detect more than 50 types of cancer, over 45 of which lack recommended screening tests today, with a low false positive rate of less than 1%. The test also determines the origin of the cancer with high accuracy.
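The emphasis on a low false-positive rate can be made concrete with basic screening arithmetic. In the Python sketch below, only the sub-1% false-positive rate is taken from the text; the prevalence and sensitivity values are hypothetical assumptions for illustration, not GRAIL's published performance figures:

```python
# Illustrative screening arithmetic. Only the "<1% false positive rate"
# is from the press release; prevalence and sensitivity are hypothetical.

def screening_ppv(prevalence, sensitivity, false_positive_rate):
    """Fraction of positive screens that are true cancers (Bayes' rule)."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# Hypothetical: 1 in 100 people screened has a detectable cancer and the
# test finds half of them; false-positive rate 0.5% (within the stated <1%).
print(f"{screening_ppv(0.01, 0.5, 0.005):.1%}")  # about half of positives are true
# With a 5% false-positive rate, the same arithmetic drops sharply:
print(f"{screening_ppv(0.01, 0.5, 0.05):.1%}")   # under 10%
```

Under these assumptions, roughly half of positive screens would reflect a real cancer; raising the false-positive rate to 5% drops that share below 10%, which is why a sub-1% false-positive rate matters when screening large populations.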
Veterans interested in the Galleri test and REFLECTION study can ask their primary care provider for more information on how to participate.
“We are thrilled to partner with the VA and U.S. veterans for this important evaluation of the Galleri test, alongside recommended standard screenings, for its potential to transform early cancer detection in this at-risk population,” said Bob Ragusa, chief executive officer at GRAIL. “The partnership will help veterans access the REFLECTION registry study and receive a test we hope will lead to more cancer diagnoses at an earlier stage, when treatment is more likely to be successful.”
###
ABOUT VA PITTSBURGH HEALTHCARE SYSTEM
VA Pittsburgh Healthcare System (VAPHS) is one of the largest and most progressive VA health care systems in the nation. More than 4,000 employees serve nearly 80,000 veterans every year, providing a range of services from complex transplant medicine to routine primary care. VAPHS is a leader in virtual care delivery through telehealth technology; a center of research and learning with 130 research investigators and $14.8 million in funding in fiscal year 2021; and a provider of state-of-the-art health care training to some 1,500 student trainees annually. VAPHS provides care at medical centers in Pittsburgh's Oakland neighborhood and nearby O’Hara Township, both in Pennsylvania, and five outpatient clinics in Belmont County, Ohio, and Beaver, Fayette, Washington and Westmoreland counties in Pennsylvania. An additional site of care is expected to open in Monroeville, Pennsylvania, in 2023. Veterans can call 412-360-6162 to check eligibility or enrollment. Stay up to date at pittsburgh.va.gov, Facebook and Twitter.
ABOUT VETERANS HEALTH FOUNDATION
Established in 1991, the Veterans Health Foundation (VHF), formerly the Veterans Research Foundation of Pittsburgh, facilitates and supports extramural research and educational activities by collaborating with VA Pittsburgh Healthcare System, private companies, government agencies, foundations and academic institutions. Title 38 USC §7361-7366 authorizes VA medical centers to establish nonprofit research and education corporations to accept and administer private and non-VA federal funds in support of VA's research and education missions. The congressional intent in enabling these corporations is to provide VA facilities with a flexible funding mechanism for the conduct of research as well as staff and patient education.
ABOUT GRAIL
GRAIL is a health care company whose mission is to detect cancer early, when it can be cured. GRAIL is focused on alleviating the global burden of cancer by developing pioneering technology to detect and identify multiple deadly cancer types early. The company is using the power of next-generation sequencing, population-scale clinical studies, and state-of-the-art computer science and data science to enhance the scientific understanding of cancer biology, and to develop its multi-cancer early detection blood test. GRAIL is headquartered in Menlo Park, California, with locations in Washington, D.C., North Carolina, and the United Kingdom. GRAIL, LLC, is a wholly-owned subsidiary of Illumina, Inc. (NASDAQ:ILMN). For more information, please visit grail.com.
ABOUT GALLERI®
The earlier that cancer is detected, the higher the chance of successful outcomes. The Galleri multi-cancer early detection test can detect cancer signals across more than 50 types of cancer, as defined by the American Joint Committee on Cancer Staging Manual, through a routine blood draw. When a cancer signal is detected, the Galleri test predicts the cancer signal origin, or where the cancer is located in the body, with high accuracy to help guide the next steps to diagnosis. The Galleri test requires a prescription from a licensed health care provider and should be used in addition to recommended cancer screenings such as mammography, colonoscopy, prostate-specific antigen (PSA) test, or cervical cancer screening. It is intended for use in people with an elevated risk of cancer, such as those aged 50 or older. For more information about Galleri, visit galleri.com.
Important Galleri Safety Information The Galleri test is recommended for use in adults with an elevated risk for cancer, such as those aged 50 or older. The Galleri test may not detect a cancer signal across all cancers and should be used in addition to routine cancer screening tests recommended by a health care provider. Galleri is intended to detect cancer signals and predict where in the body the cancer signal is located. Use of Galleri is not recommended in individuals who are pregnant, 21 years old or younger, or undergoing active cancer treatment.
Results should be interpreted by a health care provider in the context of medical history, clinical signs and symptoms. A test result of “Cancer Signal Not Detected” does not rule out cancer. A test result of “Cancer Signal Detected” requires confirmatory diagnostic evaluation by medically established procedures (e.g. imaging) to confirm cancer.
If cancer is not confirmed with further testing, it could mean that cancer is not present or testing was insufficient to detect cancer, including due to the cancer being located in a different part of the body. False-positive (a cancer signal detected when cancer is not present) and false-negative (a cancer signal not detected when cancer is present) test results do occur. Rx only.
Laboratory/Test Information GRAIL’s clinical laboratory is certified under the Clinical Laboratory Improvement Amendments of 1988 (CLIA) and accredited by the College of American Pathologists. The Galleri test was developed, and its performance characteristics were determined by GRAIL. The Galleri test has not been cleared or approved by the U.S. Food and Drug Administration. GRAIL’s clinical laboratory is regulated under CLIA to perform high-complexity testing. The Galleri test is intended for clinical purposes.
Source: https://www.roswellpark.org/cancertalk/201907/how-detect-skin-cancer
How to Detect Skin Cancer
When it comes to skin cancer, we have some good news and some bad news.
First, the bad news: skin cancer is the most commonly diagnosed cancer in the United States. Each year, nearly 5 million people are treated for skin cancer, and in the last three decades, more Americans have had skin cancer than all other cancers combined.
But here’s the good news: You can often see the early warning signs of skin cancer...without an x-ray or blood test or special diagnostic procedure. If you know what to look for and take action when you see it, most skin cancers can be detected and treated at early stages, when they are most curable.
Even for melanoma, a more dangerous skin cancer type that is more likely to spread to other body areas, the five-year survival rate is 99% for people whose melanoma is detected and treated before it spreads to the lymph nodes.
How Can I Detect Skin Cancer?
The first answer is to simply look at your skin. Because you see your skin every day, you are detector number one. By knowing what is normal for your skin, and then thoroughly inspecting it on a regular – usually monthly – basis, many skin cancers can be self-detected.
When examining your skin, take note of all existing spots, moles and freckles on your skin, so that you’ll know when changes occur or a new one appears. You can track these easily with this body mole map from the American Academy of Dermatology. Stand in front of a mirror and examine your front and back, head to toe. Bend your elbows and look carefully at your forearms, palms and the back of your upper arms. Use a hand mirror (and ask someone for help) to check the back of your neck, scalp, buttocks and other hard-to-see places. Don’t forget the bottoms of your feet and between your toes.
What Should I Look for When Checking my Skin?
Look for any new moles or changes in your skin, especially any of the following:
A change in size, shape, and/or color of an existing mole, lump or growth
A sore that doesn’t heal
A red or brown patch that’s rough and scaly
A pink pearly bump that bleeds easily
Any mole or spot that is asymmetrical, or has an irregular border or uneven color
Any mole or spot larger than ¼ of an inch (size of a pencil eraser)
Should I Use a Skin Cancer Detection App?
Anything that reminds you to look for signs of skin cancer is a good thing. However, some smartphone apps claim to be able to assess certain skin changes and inform individuals whether such changes warrant a visit to a dermatologist for further analysis.
Thus far, the accuracy of these apps is not high enough, and by relying solely on an app rather than on your own observations and visits to a doctor, you could put yourself at risk by delaying a visit when one is warranted. In one recent study, the most accurate skin cancer detection app missed almost 30% of melanomas, diagnosing them as low-risk lesions.
However, these apps are evolving, and one day they could become part of the arsenal to help detect skin cancer. Smartphones can be useful in terms of telemedicine. For instance, in locations where dermatologists may not be readily available, a local physician can send a photo of a suspicious mole to a dermatologist and based on visual inspection and communication with that physician, determine what steps to take next.
Can Blood Tests or Scans Detect Skin Cancer?
Currently, blood tests and imaging scans like MRI or PET are not used as screening tests for skin cancer. However, some national studies are underway to determine if concentrations of skin cancer DNA can be detected by blood tests. Occasionally, imaging detects signs of advanced disease. Sometimes, skin cancer that has spread to internal organs is detected incidentally when a patient is undergoing an imaging study such as MRI or PET scan for unrelated conditions.
What Should I Do if I Have a Suspicious Spot?
Make an appointment with your physician or a dermatologist as soon as possible. If your physician sees something of concern, he or she will usually refer you to a dermatologist. While there are sometimes waiting lists for routine dermatology appointments, in cases where skin cancer is suspected, most dermatologists, including those at Roswell Park, will get you in for a screening as soon as possible.
As part of the physical exam, dermatologists use a dermatoscope, a special magnifying lens and light source held near the skin. If an area is suspicious, the physician will take a biopsy, removing all or part of the abnormal area for examination by a pathologist. At Roswell Park, our dermatopathologists — pathologists who specialize in skin cancers — conduct the laboratory examination and testing of the tissue. The biopsy is usually a minor procedure that includes numbing the area to be tested.
If the diagnosis is melanoma or certain types of squamous cell carcinoma, which have a risk of spreading, additional testing may be required to learn whether the cancer has grown deeper in the skin or has spread to lymph nodes or other parts of the body. These tests may include blood tests, imaging such as MRI, CT or PET scans or procedures, such as lymph node biopsy or removal.
Am I at Risk for Skin Cancer?
Anyone can get skin cancer, regardless of skin color. However, some factors increase your risk, including:
A personal history of skin cancer
Skin that burns, freckles, reddens easily, or becomes painful in the sun
Blue or green eyes
Blond or red hair
Unprotected exposure to sun
A history of indoor tanning
Certain types and a large number of moles
A family history of skin cancer
Having had a lung, heart, kidney, pancreas or liver transplant
Should I Have Routine Skin Cancer Screenings?
While many routine cancer screenings, such as colonoscopies and mammograms, are recommended when a person reaches a certain age, there are no widely adopted age standards for dermatological screenings. Most primary care physicians will perform a quick skin check at a routine physical, but we recommend that those with a higher risk for skin cancer have a thorough skin screening by a dermatologist at least once a year. This includes anyone with the risk factors listed above.
How Can I Prevent Skin Cancer?
For all types of skin cancer, the first lines of defense are awareness and prevention. Prevention steps center on avoiding ultraviolet radiation exposure from both sunlight and tanning beds. This means staying out of the sun, especially when the sun’s rays are strongest, between 11 a.m. and 3 p.m.; using a broad-spectrum water-resistant sunscreen with SPF of at least 30 and covering exposed skin with protective clothing when outdoors, even on a cloudy day.
Source: Routine blood investigations have limited utility in surveillance of ... (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6162229/)
Note: This work is published under the standard license to publish agreement. After 12 months the work will become freely available and the license terms will switch to a Creative Commons Attribution 4.0 International (CC BY 4.0).
Methods
We conducted a multi-centre retrospective analysis of all patients
diagnosed with aggressive lymphoma treated with curative-intent chemotherapy who
achieved CR for at least 3 months between 2000 and 2015. An abnormal blood test
was defined as any new and unexplained abnormality for full blood examination,
lactate dehydrogenase or erythrocyte sedimentation rate.
Results
Three hundred and forty-six patients attended a total of 3084
outpatient visits; blood tests were performed at 90% of these appointments.
Fifty-six (16%) patients relapsed. Routine laboratory testing detected relapse in
only three patients (5% of relapses); in the remaining patients, relapse was
suspected clinically (80%) or detected by imaging (15%). The sensitivity of all
blood tests was 42% and the positive predictive value was 9%. No significant
difference in survival was shown in patients who underwent a routine blood test
within 3 months prior to relapse versus those who did not (p = 0.88).
Conclusions
Routine blood tests demonstrate unacceptably poor performance
characteristics, have no impact on survival and thus have limited value in the
detection of relapse in routine surveillance.
Subject terms: Hodgkin lymphoma, Non-Hodgkin lymphoma, Lymphoma
Introduction
While the majority of patients with aggressive lymphomas achieve
complete remission (CR) with anthracycline-based combination chemotherapy, up to 50%
of patients will relapse.1–3 As a significant proportion of patients who relapse
are considered for salvage chemotherapy and curative-intent autologous stem cell
transplant, surveillance after first-line therapy is
recommended.4
In patients achieving CR, the optimal frequency, duration and type of
surveillance are not established. As follow-up imaging is associated with increased
radiation-related risk and minimal benefit in asymptomatic patients, such
surveillance is no longer routine.5–7 Regular laboratory testing (Labs) still features in internationally recognised surveillance guidelines, despite limited evidence for its use in detecting relapse.7–10 Studies conducted prior to modern treatment response assessments and routine rituximab administration suggested that lactate dehydrogenase (LDH) and erythrocyte sedimentation rate (ESR) may be useful as surveillance tools; more recently, the absolute lymphocyte count (ALC) and lymphocyte–monocyte ratio (LMR) have shown promise in small series.11–13
However, large-scale data are lacking, particularly in the era of positron emission
tomography (PET)-defined complete metabolic response (CMR).
Clinically significant scan-related anxiety has been established in
both lymphoma and solid malignancies14,15;
this is reported in up to 80% of patients and does not abate over time. It is likely
that blood tests have similar consequences. In addition, routine laboratory
investigations have cost implications and are potentially falsely reassuring if
normal. Abnormal results are also associated with the potential for expensive,
unnecessary additional investigations.
To evaluate the role of routine blood testing in follow-up of patients
with aggressive lymphoma, we analysed the use of blood tests in patients with
high-grade lymphomas undergoing surveillance after achieving CMR from
curative-intent combination chemotherapy at three large Australian cancer centres.
In particular, we examined the utility of routine tests for the detection of relapse
in the absence of clinical symptoms or signs, and whether performing such tests was
associated with significant differences in post-relapse survival.
Methods
Patients
Patients were identified from an electronic database at three
institutions. Eligible patients were aged 16 years or older, with a documented
histological diagnosis of diffuse large B cell lymphoma (DLBCL), Hodgkin's
lymphoma (HL), T cell lymphoma (TCL) or Burkitt lymphoma (BL) who received
curative-intent first-line treatment and were in documented CR on PET/CT for at least 3
months after completion of therapy. Those with primary progressive lymphoma, in
partial remission (PR) at the end of first-line treatment, primary central nervous
system lymphoma, HIV-associated lymphoma and transformation from indolent subtypes
were excluded from the analysis.
All information was obtained from electronic patient records. Data
were collected on gender, age, disease stage, comorbidities, presence of B
symptoms, Eastern Cooperative Oncology Group performance status, extranodal sites
of disease, prognostic score and first-line chemotherapy treatment. Details of
each outpatient appointment were recorded, including pathology results, presence
of relevant symptoms and/or clinical signs (the absence of both was deemed
‘asymptomatic’), whether the visit was scheduled or unplanned, and outcomes
including routine subsequent visit, earlier planned review and results of
additional investigations ordered. Relapse date, site and method of diagnosis, any
further treatment and date of death or last follow-up were also documented.
Patient follow-up at all three institutions was according to
institutional guidelines as follows: 3-monthly for the first 2 years after
completion of therapy, and then every 6 to 12 months for the following 3 years for
at least 5 years in total. Blood tests were recommended but performed at the
treating physician’s discretion. Imaging was also performed according to
the treating physician’s discretion but removed from the institutional guidelines
in 2014.
Statistical analysis
The primary endpoints of the study were to assess whether full
blood examination (FBE: haemoglobin, white cell count and platelet count), LDH,
ESR, ALC, absolute monocyte count (AMC) and LMR during follow-up are reliable
markers to predict relapse. Secondary endpoints include methods of relapse
detection, event-free survival (EFS) and overall survival (OS). EFS was defined as
the period from the date of diagnosis until relapse, disease progression or death
from any cause. OS was measured from the date of diagnosis until death from any
cause.
Laboratory results were considered abnormal if all of the following
were fulfilled: (a) any component of FBE, LDH or ESR fell outside local laboratory
normal limits, (b) the derangement was not present previously and (c) could not be
explained by a concurrent medical condition. Abnormal laboratory results were
investigated at clinician discretion. Laboratory results were evaluated based on
their independent ability to detect relapse within 3 months of confirmation.
Sensitivity, specificity, negative predictive value (NPV) and positive predictive
value (PPV) were derived from 2 × 2 contingency tables and 95% confidence
intervals (CIs) were determined exactly.
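The 2 × 2 contingency-table metrics described above can be sketched in a few lines. The counts below are hypothetical, chosen only to roughly echo the reported 42%/87%/9%/98%; note also that the paper determined exact 95% CIs, whereas this dependency-free sketch substitutes the Wilson score interval.

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion k/n (approximation,
    not the exact CI used in the paper)."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

def diagnostic_metrics(tp, fp, fn, tn):
    # Each entry: (point estimate, (CI lower, CI upper))
    return {
        "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
        "ppv": (tp / (tp + fp), wilson_ci(tp, tp + fp)),
        "npv": (tn / (tn + fn), wilson_ci(tn, tn + fn)),
    }

# Hypothetical visit-level counts: tp = abnormal result within 3 months of
# relapse, fp = abnormal result without relapse, fn = normal result despite
# relapse, tn = normal result without relapse.
metrics = diagnostic_metrics(tp=5, fp=50, fn=7, tn=400)
for name, (point, (lo, hi)) in metrics.items():
    print(f"{name}: {point:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```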
In addition, receiver operating characteristics (ROC) and area
under the curve (AUC) analysis were undertaken to determine the utility of ALC,
AMC and LMR as a marker for relapse. AMC and ALC were evaluated as continuous
variables, and LMR was calculated by dividing the ALC by the AMC. Survival
analysis was performed using the Kaplan–Meier method and compared by the log-rank
test between different groups. All p values were two-sided and statistical
significance was accepted at p < 0.05. The
study was approved by the local institutional review boards (LR117/2015).
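A minimal sketch of the Kaplan–Meier estimator used for the EFS and OS analyses; the follow-up times and event flags below are invented for illustration and are not study data.

```python
def kaplan_meier(times, events):
    """times: follow-up in months; events: 1 = event (relapse/death), 0 = censored.
    Returns [(time, survival probability)] at each observed event time."""
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    surv, curve = 1.0, []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        d = sum(e for tt, e in pairs if tt == t)   # events at time t
        m = sum(1 for tt, _ in pairs if tt == t)   # all leaving the risk set at t
        if d > 0:
            surv *= 1 - d / n_at_risk
            curve.append((t, surv))
        n_at_risk -= m
        i += m
    return curve

times  = [3, 6, 6, 10, 14, 24, 30, 30]   # months, hypothetical
events = [1, 1, 0,  1,  1,  0,  0,  1]
for t, s in kaplan_meier(times, events):
    print(f"t={t:>2} months  S(t)={s:.3f}")
```

Group differences would then be compared with the log-rank test, as in the paper.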
Results
Between January 2000 and January 2015, 346 eligible patients
underwent 3048 outpatient visits. The median follow-up from CR1 was 30 months (range
3–184). Baseline demographics are detailed in Table 1. Laboratory investigations were performed at 2746 visits (90%),
with FBE being the most common test ordered (Table 2). LDH was predominantly performed in non-Hodgkin's lymphoma (NHL)
and ESR in HL.
Relapse of lymphoma occurred in 56/346 (16%) patients (33 DLBCL, 19
HL, 4 other). The median age at relapse was 64.3 years (range 18–91), and 51% were
over 60 years of age. Forty-three out of 56 (77%) had advanced stage disease and
18/56 (32%) were at high risk (as defined in Table 1). Only one patient (high-risk HL) received an abbreviated
chemotherapy course; the remaining 45 patients received a full course of standard
treatment. The median duration from treatment completion until relapse was 14 months
(range 3–84 months), with 48% of relapses occurring in the first year, 31% in the
second year and the remainder (21%) occurring up to 7 years after the end of
treatment.
Relapse was diagnosed by routine laboratory investigations in 3/56
(5%) and routine imaging in 10/56 (18%) patients. Clinical symptoms/signs led to
diagnosis of relapse in 43/56 (80%; 40 with symptoms, 3 with signs only); 19 of
which were detected at unscheduled visits. Unscheduled appointments due to
patient-reported symptoms (3% of all visits) showed a significantly stronger
association with relapse than scheduled visits (odds ratio 50.4, p < 0.001).
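The unscheduled-versus-scheduled comparison rests on an odds ratio from a 2 × 2 table of visits by relapse status. A sketch with illustrative counts follows; the underlying visit counts are not tabulated here, so these hypothetical numbers do not reproduce the reported OR of 50.4.

```python
from math import log, sqrt, exp

def odds_ratio(a, b, c, d):
    """a/b: relapse / no relapse at unscheduled visits;
    c/d: relapse / no relapse at scheduled visits.
    Returns (OR, 95% CI) via the log-OR normal approximation."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = exp(log(or_) - 1.96 * se), exp(log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts for illustration only
or_, (lo, hi) = odds_ratio(a=19, b=75, c=30, d=2900)
print(f"OR = {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```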
Abnormal laboratory results were recorded at 404/3048 follow-up
visits: 304 in asymptomatic and 100 in symptomatic patients.
Asymptomatic patients
An unexplained abnormal result prompted a change in management at
46/304 (15%) visits in asymptomatic patients: 19/46 (41%) had repeat interval
laboratory investigations only, 13/46 (28%) underwent additional imaging, 10/46
(22%) were booked for an earlier future review with repeat labs and 4/46 (9%) had
biopsies in addition to imaging. The specific laboratory abnormalities and
associated changes in management in asymptomatic patients at scheduled
appointments are described in Table 3.
Almost all elevations in LDH and ESR were <2 times the upper limit of normal
(ULN), and leukopaenia was the most common FBE abnormality (12/29; 41%) resulting
in change in management.
(Footnote to Table 3: aIncluding cases where relapse was diagnosed only after symptoms developed.)
Relapse was diagnosed by 3/304 (1%) abnormal results in
asymptomatic patients: one TCL with neutropaenia and thrombocytopaenia, and two HL patients, one with elevated LDH and one with elevated ESR. No relapses in NHL were
diagnosed on the basis of an abnormal LDH alone.
In five additional patients, relapse was detected within 3 months
of an abnormal result; however, in these cases, suspicion of relapse arose only
after the patient developed symptoms. The abnormalities were: lymphopaenia,
elevated LDH and both elevated LDH and abnormal FBE in three patients.
Symptomatic patients
In contrast, 67/100 (67%) of symptomatic patients with an abnormal
result underwent a change in their management. The most common changes were
further imaging (n = 35; 52%) and imaging with
biopsy (n = 17; 25%), followed by earlier
future review and labs (n = 7; 10%) and repeat
interval laboratory investigations only (n = 8; 12%).
The sensitivity, specificity, PPV and NPV of all routine lab tests
(FBE, LDH and ESR combined) in detecting relapse was 42%, 87%, 9% and 98%,
respectively. Performance characteristics of individual lab tests are detailed in
Table 4. The PPV of LDH remained the
same even in the subset of 115 NHL patients with elevated baseline LDH (8%; 95% CI, 5–12). In the 43 HL patients with an elevated baseline ESR, the PPV of ESR was even lower (6.5%; 95% CI, 2–17).
ROC and AUC analysis showed that ALC, AMC and LMR at each
appointment (n = 2660) were all very poor
markers for relapse (AUC = 0.517, 0.529 and 0.577 respectively); thus, their
performance characteristics were not calculated.
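The ROC AUC reported for ALC, AMC and LMR is equivalent to the Mann–Whitney concordance probability, which can be computed directly without fitting a curve. The LMR values below are hypothetical; a lower LMR is treated as the "suspicious" direction, hence the negation. An AUC near 0.5 means the marker cannot discriminate.

```python
def auc(pos_scores, neg_scores):
    """AUC as the probability that a randomly chosen positive (relapsed)
    case scores higher than a randomly chosen negative case; ties count half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

lmr_relapsed     = [1.8, 2.5, 3.1, 2.0, 4.2]   # hypothetical
lmr_non_relapsed = [2.2, 3.0, 1.9, 3.8, 2.7]   # hypothetical
# Negate so that lower LMR counts as a higher (more suspicious) score
print(f"AUC = {auc([-x for x in lmr_relapsed], [-x for x in lmr_non_relapsed]):.3f}")
```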
Two-year OS and EFS were 76% (95% CI, 71–80) and 70% (95% CI,
65–80), respectively, in the whole cohort. There was no significant difference in
post-relapse survival between patients who had laboratory investigations performed
≤3 months prior to documented relapse versus patients who did not (p = 0.88, Fig. 1).
Fig. 1 Survival curve in relapsed patients, showing post-relapse survival according to whether a blood test was performed in the preclinical period (3 months before documented relapse).
Discussion
This analysis, from one of the largest and most comprehensive series
in the modern era to our knowledge, demonstrates little benefit of including routine
laboratory testing to detect relapse in follow-up of asymptomatic patients with
aggressive lymphoma achieving metabolic CR after first-line chemotherapy. In line
with published reports, our results confirm that clinical symptoms and signs are the
single most important predictor of relapse,16,17 with 80% of relapsed patients having symptoms at
presentation and only 1% of isolated abnormal blood test results leading to a
diagnosis of relapse. There was no difference in survival between patients who had
blood tests and those who did not.
Previous studies have reported that routine blood tests do not
reliably predict relapse.17–19 However, all have assessed either only one
parameter or ‘blood tests’ as a whole without describing which tests were performed
or omitted. Our study is the only one to assess the performance of individual tests,
their role in the detection of relapse and their impact on management and overall
outcomes in a population with PET-confirmed CR at 3 months.
ESR had been proposed as a useful marker of relapse in HL in
199120,
but subsequent studies dispute this, with the vast majority of relapses detected by
clinical findings rather than by ESR alone.21,22 Nevertheless, ESR is still frequently performed
during follow-up. In our cohort, ESR had a sensitivity of only 39% for detection of
relapse in HL, and only one relapse was diagnosed by an isolated elevated
ESR.
LDH has also been proposed as a useful screening test for relapse in
DLBCL in the pre-rituximab era23, but recent studies are consistent with ours in
showing its lack of predictive value in the absence of symptoms or signs suggesting
relapse.13,18,24,25 The PPV of an elevated LDH in our aggressive NHL
cohort was 8%, even after accounting for known causes of LDH elevation such as liver
disease and infection. No relapses in NHL were detected on the basis of LDH alone.
Our findings confirm results from a previous smaller series of 100 DLBCL
patients,19 which analysed LDH at every appointment and
reported a low PPV of 9% and sensitivity of 47% for relapse. Interestingly, LDH was
ordered at 69% of HL follow-up appointments, despite a lack of evidence or
recommendations by guidelines for its use in monitoring this subtype, and led to
the detection of one HL relapse.
FBE was the most commonly performed test in our study, with an
abnormal result in 10% of samples, yet was associated with a change in management in fewer than 15% of cases. There was one relapse diagnosed on the basis of FBE alone.
These findings are also consistent with the literature, with several studies
reporting no relapses detected by FBE abnormalities.21–23
Baseline lymphocyte and monocyte counts and the LMR have prognostic
value for both DLBCL26 and HL27 and three retrospective studies concluded that a
low ALC and LMR during follow-up is a useful indicator of relapse in DLBCL. PPV and
NPV in these studies ranged between 68–74% and 49–96%, respectively, with
sensitivity 68–89% and specificity 88%.11–13 However, these studies analysed
parameters at a single time point just prior to relapse without accounting for
symptomatology or confirming initial CR on PET. In contrast, our analysis
demonstrated that ALC, AMC and LMR had almost no ability to discriminate between
relapsed and non-relapsed patients, with far lower AUC values than previously
reported (0.52 versus 0.91 for ALC).13
It may be argued that the NPV of laboratory tests was high in our
study (98%) and provides reassurance to patients with normal results. Conversely,
15% of blood tests had an unexplained abnormality; not only are they of poor PPV in
asymptomatic patients, they almost always result in unnecessary patient anxiety and
often lead to further investigations, which are seldom abnormal. Routine blood tests
have been postulated as a method of monitoring for therapy-related myelodysplastic
syndromes (MDS). The incidence of therapy-related MDS in patients receiving
induction chemotherapy for aggressive lymphomas is only marginally higher than the
general population (0.4–1.2% post treatment versus 0.3% in the general
population28–30). More importantly, there is limited evidence
for early detection of MDS in asymptomatic patients and current guidelines do not
recommend treatment for the majority of this cohort. Additionally, screening for MDS
would, at most, warrant a FBE alone, but not other currently recommended blood tests
in lymphoma surveillance guidelines.
While this study is retrospective, the design remains robust. The patients
were treated uniformly, as demonstrated by the high percentage
of patients undergoing the individual blood tests, consistent use of end of
treatment PET to confirm metabolic remission and limited variation in treatment
regimens. Unlike the majority of prior analyses,17–19 this study reviewed all labs
performed for the duration of follow-up in patients with PET-confirmed CR for at
least 3 months following treatment. Our study included all major histological
subtypes of aggressive lymphoma and is likely relevant to a wider population.
Of note, the exclusion of primary refractory disease from our cohort, necessary to
accurately analyse the role of blood tests in detecting relapse, led to a lower
proportion of high-risk patients than in many published series.
This study confirms that common blood tests do not reliably detect
relapse of aggressive lymphoma in asymptomatic patients treated in the modern era,
and they should no longer be recommended in international guidelines. They are no
longer performed in this context in our institutions. Novel methods of relapse
detection such as circulating tumour DNA have demonstrated greater specificity and
sensitivity than standard blood parameters; however, this technology is not yet
widely available or affordable.
Author contributions
E.A.H. and A.G. designed the study, Z.L. and O.E. performed the research
and F.J.H. and M.G. contributed additional data. Z.L. analysed the data, Z.L. and
E.A.H. wrote the paper and A.G. and G.C. critically revised the paper. All authors
approved the final version.
Ethics approval and consent to participate
This study was approved by the local institutional review boards.
Austin Health Human Research Ethics Committee, reference number LNR/15/AUSTIN/432.
Eastern Health Human Research Ethics Committee, reference number LR117/2015.
Availability of data and material
Materials, data and associated protocols are available from the
corresponding author on reasonable request. | Fifty-six (16%) patients relapsed. Routine laboratory testing detected relapse in
only three patients (5% of relapses); in the remaining patients, relapse was
suspected clinically (80%) or detected by imaging (15%). The sensitivity of all
blood tests was 42% and the positive predictive value was 9%. No significant
difference in survival was shown in patients who underwent a routine blood test
within 3 months prior to relapse versus those who did not (p = 0.88).
Conclusions
Routine blood tests demonstrate unacceptably poor performance
characteristics, have no impact on survival and thus have limited value in the
detection of relapse in routine surveillance.
Subject terms: Hodgkin lymphoma, Non-Hodgkin lymphoma, Lymphoma
Introduction
While the majority of patients with aggressive lymphomas achieve
complete remission (CR) with anthracycline-based combination chemotherapy, up to 50%
of patients will relapse.1–3 As a significant proportion of patients who relapse
are considered for salvage chemotherapy and curative-intent autologous stem cell
transplant, surveillance after first-line therapy is
recommended.4
In patients achieving CR, the optimal frequency, duration and type of
surveillance are not established. As follow-up imaging is associated with increased
radiation-related risk and minimal benefit in asymptomatic patients, such
surveillance is no longer routine.5–7 Regular laboratory testing (Labs)
still features in internationally recognised surveillance guidelines, despite
limited evidence for their use in detecting relapse.7–10 Studies conducted prior to
modern treatment response assessments and routine rituximab administration suggested
that lactate dehydrogenase (LDH) and erythrocyte sedimentation rate (ESR) may be
useful as surveillance tools, and | no |
Rheumatoid | Can smoking cause Rheumatoid Arthritis? | yes_statement | "smoking" can "cause" rheumatoid arthritis.. the act of "smoking" can lead to the development of rheumatoid arthritis. | https://www.cdc.gov/arthritis/basics/rheumatoid-arthritis.html | Rheumatoid Arthritis (RA) | Arthritis | CDC | What is rheumatoid arthritis (RA)?
Rheumatoid arthritis, or RA, is an autoimmune and inflammatory disease, which means that your immune system attacks healthy cells in your body by mistake, causing inflammation (painful swelling) in the affected parts of the body.
RA mainly attacks the joints, usually many joints at once. RA commonly affects joints in the hands, wrists, and knees. In a joint with RA, the lining of the joint becomes inflamed, causing damage to joint tissue. This tissue damage can cause long-lasting or chronic pain, unsteadiness (lack of balance), and deformity (misshapenness).
RA can also affect other tissues throughout the body and cause problems in organs such as the lungs, heart, and eyes.
What causes RA?
RA is the result of an immune response in which the body’s immune system attacks its own healthy cells. The specific causes of RA are unknown, but some factors can increase the risk of developing the disease.
What are the risk factors for RA?
Researchers have studied a number of genetic and environmental factors to determine if they change a person’s risk of developing RA.
Characteristics that increase risk
Age. RA can begin at any age, but the likelihood increases with age. The onset of RA is highest among adults in their sixties.
Sex. New cases of RA are typically two to three times higher in women than in men.
Genetics/inherited traits. People born with specific genes are more likely to develop RA. These genes, called HLA (human leukocyte antigen) class II genotypes, can also make your arthritis worse. The risk of RA may be highest when people with these genes are exposed to environmental factors like smoking or when a person is obese.
Smoking. Multiple studies show that cigarette smoking increases a person’s risk of developing RA and can make the disease worse.
History of live births. Women who have never given birth may be at greater risk of developing RA.
Early Life Exposures. Some early life exposures may increase risk of developing RA in adulthood. For example, one study found that children whose mothers smoked had double the risk of developing RA as adults. Children of lower income parents are at increased risk of developing RA as adults.
Obesity. Being obese can increase the risk of developing RA. Studies examining the role of obesity also found that the more overweight a person was, the higher his or her risk of developing RA became.
Characteristics that can decrease risk
Unlike the risk factors above which may increase risk of developing RA, at least one characteristic may decrease risk of developing RA.
Breastfeeding. Women who have breastfed their infants have a decreased risk of developing RA.
How is RA diagnosed?
RA is diagnosed by reviewing symptoms, conducting a physical examination, and doing X-rays and lab tests. It’s best to diagnose RA early—within 6 months of the onset of symptoms—so that people with the disease can begin treatment to slow or stop disease progression (for example, damage to joints). Diagnosis and effective treatments, particularly treatment to suppress or control inflammation, can help reduce the damaging effects of RA.
Who should diagnose and treat RA?
A doctor or a team of doctors who specialize in care of RA patients should diagnose and treat RA. This is especially important because the signs and symptoms of RA are not specific and can look like signs and symptoms of other inflammatory joint diseases. Doctors who specialize in arthritis are called rheumatologists, and they can make the correct diagnosis. To find a provider near you, visit the database of rheumatologistsexternal icon on the American College of Rheumatology (ACR) website.
How is RA treated?
RA can be effectively treated and managed with medication(s) and self-management strategies. Treatment for RA usually includes the use of medications that slow disease and prevent joint deformity, called disease-modifying antirheumatic drugs (DMARDs); biological response modifiers (biologicals) are medications that are an effective second-line treatment. In addition to medications, people can manage their RA with self-management strategies proven to reduce pain and disability, allowing them to pursue the activities important to them. People with RA can relieve pain and improve joint function by learning to use five simple and effective arthritis management strategies.
What are the complications of RA?
Rheumatoid arthritis (RA) has many physical and social consequences and can lower quality of life. It can cause pain, disability, and premature death.
Premature heart disease. People with RA are also at a higher risk for developing other chronic diseases such as heart disease and diabetes. To prevent people with RA from developing heart disease, treatment of RA also focuses on reducing heart disease risk factors. For example, doctors will advise patients with RA to stop smoking and lose weight.
Obesity. People with RA who are obese have an increased risk of developing heart disease risk factors such as high blood pressure and high cholesterol. Being obese also increases risk of developing chronic conditions such as heart disease and diabetes. Finally, people with RA who are obese experience fewer benefits from their medical treatment compared with those with RA who are not obese.
Employment. RA can make work difficult. Adults with RA are less likely to be employed than those who do not have RA. As the disease gets worse, many people with RA find they cannot do as much as they used to. Work loss among people with RA is highest among people whose jobs are physically demanding. Work loss is lower among those in jobs with few physical demands, or in jobs where they have influence over the job pace and activities.
How can I manage RA and improve my quality of life?
RA affects many aspects of daily living including work, leisure and social activities. Fortunately, there are multiple low-cost strategies in the community that are proven to increase quality of life.
Get physically active. Experts recommend that ideally adults be moderately physically active for 150 minutes per week, like walking, swimming, or biking 30 minutes a day for five days a week. You can break these 30 minutes into three separate ten-minute sessions during the day. Regular physical activity can also reduce the risk of developing other chronic diseases such as heart disease, diabetes, and depression. Learn more about physical activity for arthritis.
Go to effective physical activity programs. If you are worried about making arthritis worse or unsure how to safely exercise, participation in physical activity programs can help reduce pain and disability related to RA and improve mood and the ability to move. Classes take place at local Ys, parks, and community centers. These classes can help people with RA feel better. Learn more about the proven physical activity programs that CDC recommends.
Join a self-management education class. Participants with arthritis (including RA) gain confidence in learning how to control their symptoms, how to live well with arthritis, and how arthritis affects their lives. Learn more about the proven self-management education programs that CDC recommends.
Stop Smoking. Cigarette smoking makes the disease worse and can cause other medical problems. Smoking can also make it more difficult to stay physically active, which is an important part of managing RA. Get help to stop smoking by visiting I’m Ready to Quit on CDC’s Tips From Former Smokers website.
Maintain a Healthy Weight. Obesity can cause numerous problems for people with RA and so it’s important to maintain a healthy weight. For more information, visit the CDC Healthy Weight website. | yes
Rheumatoid | Can smoking cause Rheumatoid Arthritis? | yes_statement | "smoking" can "cause" rheumatoid arthritis.. the act of "smoking" can lead to the development of rheumatoid arthritis. | https://my.clevelandclinic.org/health/diseases/4924-rheumatoid-arthritis | Rheumatoid Arthritis (RA): Causes, Symptoms & Treatment FAQs | Rheumatoid Arthritis
Rheumatoid arthritis is a type of arthritis where your immune system attacks the tissue lining the joints on both sides of your body. It may affect other parts of your body too. The exact cause is unknown. Treatment options include lifestyle changes, physical therapy, occupational therapy, nutritional therapy, medication and surgery.
Overview
Rheumatoid arthritis is an autoimmune disease that causes symptoms in several body systems.
What is rheumatoid arthritis?
Rheumatoid arthritis (RA) is an autoimmune disease that is chronic (ongoing). It occurs in the joints on both sides of your body, which makes it different from other types of arthritis. You may have symptoms of pain and inflammation in your:
Fingers.
Hands.
Wrists.
Knees.
Ankles.
Feet.
Toes.
Uncontrolled inflammation damages cartilage, which normally acts as a “shock absorber” in your joints. In time, this can deform your joints. Eventually, your bone itself erodes. This can lead to the fusion of your joint (an effort of your body to protect itself from constant irritation).
Specific cells in your immune system (your body’s infection-fighting system) aid this process. These substances are produced in your joints but also circulate and cause symptoms throughout your body. In addition to affecting your joints, rheumatoid arthritis sometimes affects other parts of your body, including your:
Skin.
Eyes.
Mouth.
Lungs.
Heart.
Who gets rheumatoid arthritis?
Rheumatoid arthritis affects more than 1.3 million people in the United States. It’s 2.5 times more common in people designated female at birth than in people designated male at birth.
What’s the age of onset for rheumatoid arthritis?
RA usually starts to develop between the ages of 30 and 60. But anyone can develop rheumatoid arthritis. In children and young adults — usually between the ages of 16 and 40 — it’s called young-onset rheumatoid arthritis (YORA). In people who develop symptoms after they turn 60, it’s called later-onset rheumatoid arthritis (LORA).
Symptoms and Causes
What are the symptoms of rheumatoid arthritis?
Rheumatoid arthritis affects everyone differently. In some people, joint symptoms develop over several years. In other people, rheumatoid arthritis symptoms progress rapidly. Many people have time with symptoms (flares) and then time with no symptoms (remission).
Does rheumatoid arthritis cause fatigue?
Everyone’s experience of rheumatoid arthritis is a little different. But many people with RA say that fatigue is among the worst symptoms of the disease.
Living with chronic pain can be exhausting. And fatigue can make it more difficult to manage your pain. It’s important to pay attention to your body and take breaks before you get too tired.
What are rheumatoid arthritis flare symptoms?
The symptoms of a rheumatoid arthritis flare aren’t much different from the symptoms of rheumatoid arthritis. But people with RA have ups and downs. A flare is a time when you have significant symptoms after feeling better for a while. With treatment, you’ll likely have periods of time when you feel better. Then, stress, changes in weather, certain foods or infections trigger a period of increased disease activity.
Although you can’t prevent flares altogether, there are steps you can take to help you manage them. It might help to write your symptoms down every day in a journal, along with what’s going on in your life. Share this journal with your rheumatologist, who may help you identify triggers. Then you can work to manage those triggers.
What causes rheumatoid arthritis?
The exact cause of rheumatoid arthritis is unknown. Researchers think it’s caused by a combination of genetics, hormones and environmental factors.
Normally, your immune system protects your body from disease. With rheumatoid arthritis, something triggers your immune system to attack your joints. An infection, smoking, or physical or emotional stress may act as a trigger.
Is rheumatoid arthritis genetic?
Scientists have studied many genes as potential risk factors for RA. Certain genetic variations and non-genetic factors contribute to your risk of developing rheumatoid arthritis. Non-genetic factors include sex and exposure to irritants and pollutants.
People born with variations in the human leukocyte antigen (HLA) genes are more likely to develop rheumatoid arthritis. HLA genes help your immune system tell the difference between proteins your body makes and proteins from invaders like viruses and bacteria.
What are the risk factors for developing rheumatoid arthritis?
There are several risk factors for developing rheumatoid arthritis. These include:
Family history: You’re more likely to develop RA if you have a close relative who also has it.
Sex: Women and people designated female at birth are two to three times more likely to develop rheumatoid arthritis.
Smoking: Smoking increases a person’s risk of rheumatoid arthritis and makes the disease worse.
Obesity: Your chances of developing RA are higher if you have obesity.
Diagnosis and Tests
How is rheumatoid arthritis diagnosed?
Your healthcare provider may refer you to a physician who specializes in arthritis (rheumatologist). Rheumatologists diagnose people with rheumatoid arthritis based on a combination of several factors. They’ll do a physical exam and ask you about your medical history and symptoms. Your rheumatologist will order blood tests and imaging tests.
The blood tests look for inflammation and blood proteins (antibodies) that are signs of rheumatoid arthritis. These may include:
About 60% to 70% of people living with rheumatoid arthritis have antibodies to cyclic citrullinated peptides (CCP), a type of protein.
Your rheumatologist may order imaging tests to look for signs that your joints are wearing away. Rheumatoid arthritis can cause the ends of the bones within your joints to wear down. The imaging tests may include:
In some cases, your provider may watch how you do over time before making a definitive diagnosis of rheumatoid arthritis.
What are the diagnostic criteria for rheumatoid arthritis?
Diagnostic criteria are a set of signs, symptoms and test results your provider looks for before telling you that you’ve got rheumatoid arthritis. They’re based on years of research and clinical practice. Some people with RA don’t have all the criteria. Generally, though, the diagnostic criteria for rheumatoid arthritis include:
Inflammatory arthritis in two or more large joints (shoulders, elbows, hips, knees and ankles).
Management and Treatment
What are the goals of treating rheumatoid arthritis?
The most important goal of treating rheumatoid arthritis is to reduce joint pain and swelling. Doing so should help maintain or improve joint function. The long-term goal of treatment is to slow or stop joint damage. Controlling joint inflammation reduces your pain and improves your quality of life.
How is rheumatoid arthritis treated?
Joint damage generally occurs within the first two years of diagnosis, so it’s important to see your provider if you notice symptoms. Treating rheumatoid arthritis in this “window of opportunity” can help prevent long-term consequences.
Treatments for rheumatoid arthritis include lifestyle changes, therapies, medicine and surgery. Your provider considers your age, health, medical history and how bad your symptoms are when deciding on a treatment.
What medications treat rheumatoid arthritis?
Early treatment with certain drugs can improve your long-term outcome. Combinations of drugs may be more effective than, and appear to be as safe as, single-drug therapy.
There are many medications to decrease joint pain, swelling and inflammation, and to prevent or slow down the disease. Medications that treat rheumatoid arthritis include:
Non-steroidal anti-inflammatory drugs (NSAIDs)
COX-2 inhibitors are another kind of NSAID. They include products like celecoxib (Celebrex®). COX-2 inhibitors have fewer bleeding side effects on your stomach than typical NSAIDs.
Corticosteroids
Corticosteroids, also known as steroids, also can help with pain and inflammation. They include prednisone and cortisone.
Disease-modifying antirheumatic drugs (DMARDs)
Unlike other NSAIDs, DMARDs actually can slow the disease process by modifying your immune system. Your provider may prescribe DMARDs alone and in combination with steroids or other drugs. Common DMARDs include:
Methotrexate (Trexall®).
Hydroxychloroquine (Plaquenil®).
Sulfasalazine (Azulfidine®).
Leflunomide (Arava®).
Janus kinase (JAK) inhibitors
JAK inhibitors are another type of DMARD. Rheumatologists often prescribe JAK inhibitors for people who don’t improve taking methotrexate alone. These products include:
Biologics
If you don’t respond well to DMARDs, your provider may prescribe biologic response agents (biologics). Biologics target the molecules that cause inflammation in your joints. Providers think biologics are more effective because they attack the cells at a more specific level. These products include:
Biologics tend to work rapidly — within two to six weeks. Your provider may prescribe them alone or in combination with a DMARD like methotrexate.
What is the safest drug for rheumatoid arthritis?
The safest drug for rheumatoid arthritis is one that gives you the most benefit with the least amount of negative side effects. This varies depending on your health history and the severity of your RA symptoms. Your healthcare provider will work with you to develop a treatment program. The drugs your healthcare provider prescribes will match the seriousness of your condition.
It’s important to meet with your healthcare provider regularly. They’ll watch for any side effects and change your treatment, if necessary. Your healthcare provider may order tests to determine how effective your treatment is and if you have any side effects.
Will changing my diet help my rheumatoid arthritis?
When combined with the treatments and medications your provider recommends, changes in diet may help reduce inflammation and other symptoms of RA. But it won’t cure you. You can talk with your doctor about adding good fats and minimizing bad fats, salt and processed carbohydrates. No herbal or nutritional supplements, like collagen, can cure rheumatoid arthritis. These dietary changes are safer and most successful when monitored by your rheumatologist.
But there are lifestyle changes you can make that may help relieve your symptoms. Your rheumatologist may recommend weight loss to reduce stress on inflamed joints.
People with rheumatoid arthritis also have a higher risk of coronary artery disease. High blood cholesterol (a risk factor for coronary artery disease) can respond to changes in diet. A nutritionist can recommend specific foods to eat or avoid to reach a desirable cholesterol level.
When is surgery used to treat rheumatoid arthritis?
Surgery may be an option to restore function to severely damaged joints. Your provider may also recommend surgery if your pain isn’t controlled with medication. Surgeries that treat RA include:
Outlook / Prognosis
What is the prognosis (outlook) for people who have rheumatoid arthritis?
Although there’s no cure for rheumatoid arthritis, there are many effective methods for decreasing your pain and inflammation and slowing down your disease process. Early diagnosis and effective treatment are very important.
What types of lifestyle changes can help with rheumatoid arthritis?
Having a lifelong illness like rheumatoid arthritis may make you feel like you don’t have much control over your quality of life. While there are aspects of RA that you can’t control, there are things you can do to help you feel the best that you can.
Such lifestyle changes include:
Rest
When your joints are inflamed, the risk of injury to your joints and nearby soft tissue structures (such as tendons and ligaments) is high. This is why you need to rest your inflamed joints. But it’s still important for you to exercise. Maintaining a good range of motion in your joints and good fitness overall are important in coping with RA.
Exercise
Pain and stiffness can slow you down. Some people with rheumatoid arthritis become inactive. But inactivity can lead to a loss of joint motion and loss of muscle strength. These, in turn, decrease joint stability and increase pain and fatigue.
Regular exercise can help prevent and reverse these effects. You might want to start by seeing a physical or occupational therapist for advice about how to exercise safely. Beneficial workouts include:
Range-of-motion exercises to preserve and restore joint motion.
Exercises to increase strength.
Exercises to increase endurance (walking, swimming and cycling).
Frequently Asked Questions
What are the early signs of rheumatoid arthritis?
Early signs of rheumatoid arthritis include tenderness or pain in small joints like those in your fingers or toes. Or you might notice pain in a larger joint like your knee or shoulder. These early signs of RA are like an alarm clock set to vibrate. It might not always be enough to get your attention. But the early signs are important because the sooner you’re diagnosed with RA, the sooner your treatment can begin. And prompt treatment may mean you are less likely to have permanent, painful joint damage.
What is early stage rheumatoid arthritis?
Providers sometimes use the term “early rheumatoid arthritis” to describe the condition in people who’ve had symptoms of rheumatoid arthritis for fewer than six months.
What are the four stages of rheumatoid arthritis?
Stage 1: In early stage rheumatoid arthritis, the tissue around your joint(s) is inflamed. You may have some pain and stiffness. If your provider ordered X-rays, they wouldn’t see destructive changes in your bones.
Stage 2: The inflammation has begun to damage the cartilage in your joints. You might notice stiffness and a decreased range of motion.
Stage 3: The inflammation is so severe that it damages your bones. You’ll have more pain, stiffness and even less range of motion than in stage 2, and you may start to see physical changes.
What’s the normal sed rate for rheumatoid arthritis?
Sed rate (erythrocyte sedimentation rate, also known as ESR) is a blood test that helps detect inflammation in your body. Your healthcare provider may also use this test to watch how your RA progresses. Normal sed rates are as follows:
People designated male at birth: < 50 years old, ESR ≤ 15 mm/hr; > 50 years old, ESR ≤ 20 mm/hr.
People designated female at birth: < 50 years old, ESR ≤ 20 mm/hr; > 50 years old, ESR ≤ 30 mm/hr.
In rheumatoid arthritis, your sed rate is likely higher than normal. To take part in clinical trials related to rheumatoid arthritis, you usually need an ESR of ≥ 28 mm/hr. With treatment, your sed rate may decrease. If you reach the normal ranges listed above, you may be in remission.
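The age- and sex-specific cutoffs above can be captured in a small helper function. This is a sketch: the function names are illustrative, and a patient aged exactly 50 is grouped with the older band, a boundary the listed ranges leave unspecified:

```python
def normal_esr_upper_limit(age, sex):
    """Upper limit of normal ESR (mm/hr) by age and sex designation at birth.

    Thresholds follow the ranges listed above; age 50 exactly is treated
    as the older band (an assumption, since the ranges use < 50 and > 50).
    sex: "male" or "female".
    """
    if sex == "male":
        return 15 if age < 50 else 20
    return 20 if age < 50 else 30

def esr_is_normal(esr, age, sex):
    """True if a sed rate falls at or below the normal upper limit."""
    return esr <= normal_esr_upper_limit(age, sex)

print(esr_is_normal(18, age=45, sex="male"))    # 18 > 15, so not normal
print(esr_is_normal(28, age=62, sex="female"))  # 28 <= 30, within normal range
```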
What is the difference?
Rheumatoid arthritis vs. osteoarthritis
Rheumatoid arthritis and osteoarthritis are both common causes of pain and stiffness in joints. But they have different causes. In osteoarthritis, inflammation and injury break down your cartilage over time. In rheumatoid arthritis, your immune system attacks the lining of your joints.
Is rheumatoid arthritis a disability?
The Americans with Disabilities Act (ADA) says that a disability is a physical or mental impairment that limits one or more major life activity. If RA impacts your ability to function, you may qualify for disability benefits from the Social Security Administration.
Can rheumatoid arthritis go away?
No, rheumatoid arthritis doesn’t go away. It’s a condition you’ll have for the rest of your life. But you may have periods where you don’t notice symptoms. These times of feeling better (remission) may come and go.
That said, the damage RA causes in your joints is here to stay. If you don’t see a provider for RA treatment, the disease can cause permanent damage to your cartilage and, eventually, your joints. RA can also harm organs like your lungs and heart.
A note from Cleveland Clinic
If you have rheumatoid arthritis, you may feel like you’re on a lifelong roller coaster of pain and fatigue. It’s important to share these feelings and your symptoms with your healthcare provider. Along with X-rays and blood tests, what you say about your quality of life will help inform your treatment. Your healthcare provider will assess your symptoms and recommend the right treatment plan for your needs. Most people can manage rheumatoid arthritis and still do the activities they care about. | ’re more likely to develop RA if you have a close relative who also has it.
Sex: Women and people designated female at birth are two to three times more likely to develop rheumatoid arthritis.
Smoking:Smoking increases a person’s risk of rheumatoid arthritis and makes the disease worse.
Obesity: Your chances of developing RA are higher if you have obesity.
Diagnosis and Tests
How is rheumatoid arthritis diagnosed?
Your healthcare provider may refer you to a physician who specializes in arthritis (rheumatologist). Rheumatologists diagnose people with rheumatoid arthritis based on a combination of several factors. They’ll do a physical exam and ask you about your medical history and symptoms. Your rheumatologist will order blood tests and imaging tests.
The blood tests look for inflammation and blood proteins (antibodies) that are signs of rheumatoid arthritis. These may include:
About 60% to 70% of people living with rheumatoid arthritis have antibodies to cyclic citrullinated peptides (CCP) (proteins).
Your rheumatologist may order imaging tests to look for signs that your joints are wearing away. Rheumatoid arthritis can cause the ends of the bones within your joints to wear down. The imaging tests may include:
In some cases, your provider may watch how you do over time before making a definitive diagnosis of rheumatoid arthritis.
What are the diagnostic criteria for rheumatoid arthritis?
Diagnostic criteria are a set of signs, symptoms and test results your provider looks for before telling you that you’ve got rheumatoid arthritis. They’re based on years of research and clinical practice. Some people with RA don’t have all the criteria.
Source: Rheumatoid Arthritis | Health Topics | NIAMS (https://www.niams.nih.gov/health-topics/rheumatoid-arthritis)
Overview of Rheumatoid Arthritis
Rheumatoid arthritis (RA) is a chronic (long-lasting) autoimmune disease that mostly affects joints. RA occurs when the immune system, which normally helps protect the body from infection and disease, attacks its own tissues. The disease causes pain, swelling, stiffness, and loss of function in joints.
Additional features of rheumatoid arthritis can include the following:
It affects the lining of the joints, which damages the tissue that covers the ends of the bones in a joint.
RA often occurs in a symmetrical pattern, meaning that if one knee or hand has the condition, the other hand or knee is often also affected.
It can affect the joints in the wrists, hands, elbows, shoulders, feet, spine, knees, and jaw.
RA may cause fatigue, occasional fevers, and a loss of appetite.
RA may cause medical problems outside of the joints, in areas such as the heart, lungs, blood, nerves, eyes, and skin.
Fortunately, current treatments can help people with the disease to lead productive lives.
What happens in rheumatoid arthritis?
Doctors do not know why the immune system attacks joint tissues. However, they do know that when a series of events occurs, rheumatoid arthritis can develop. This series of events includes:
A combination of genes and exposure to environmental factors starts the development of RA.
The immune system may be activated years before symptoms appear.
The start of the autoimmune process may happen in other areas of the body, but the impact of the immune malfunction typically settles in the joints.
Immune cells cause inflammation in the inner lining of the joint, called the synovium.
This inflammation becomes chronic, and the synovium thickens due to an increase of cells, production of proteins, and other factors in the joint, which can lead to pain, redness, and warmth.
As RA progresses, the thickened and inflamed synovium pushes further into the joint and destroys the cartilage and bone within the joint.
As the joint capsule stretches, the forces cause changes within the joint structure.
The surrounding muscles, ligaments, and tendons that support and stabilize the joint become weak over time and do not work as well. This can lead to more pain and joint damage, and problems using the affected joint.
Who Gets Rheumatoid Arthritis?
You are more likely to get rheumatoid arthritis if you have certain risk factors. These include:
Age. The disease can happen at any age; however, the risk for developing rheumatoid arthritis increases with older age. Children and younger teenagers may be diagnosed with juvenile idiopathic arthritis, a condition related to rheumatoid arthritis.
Sex. Rheumatoid arthritis is more common among women than men. About two to three times as many women as men have the disease. Researchers think that reproductive and hormonal factors may play a role in the development of the disease for some women.
Family history and genetics. If a family member has RA, you may be more likely to develop the disease. There are several genetic factors that slightly increase the risk of getting RA.
Smoking. Research shows that people who smoke over a long period of time are at an increased risk of getting rheumatoid arthritis. For people who continue to smoke, the disease may be more severe.
Obesity. Some research shows that being obese may increase your risk for the disease as well as limit how much the disease can be improved.
Periodontitis. Gum disease may be associated with developing RA.
Lung diseases. Diseases of the lungs and airways may also be associated with developing RA.
Symptoms of Rheumatoid Arthritis
Common symptoms of rheumatoid arthritis include:
Joint pain at rest and when moving, along with tenderness, swelling, and warmth of the joint.
Joint stiffness that lasts longer than 30 minutes, typically after waking in the morning or after resting for a long period of time.
Joint swelling that may interfere with daily activities, such as difficulty making a fist, combing hair, buttoning clothes, or bending knees.
Fatigue – feeling unusually tired or having low energy.
Occasional low-grade fever.
Loss of appetite.
Rheumatoid arthritis can happen in any joint; however, it is more common in the wrists, hands, and feet. The symptoms often happen on both sides of the body, in a symmetrical pattern. For example, if you have RA in the right hand, you may also have it in the left hand.
RA affects people differently. In some people, RA starts with mild or moderate inflammation affecting just a few joints. However, if it is not treated or the treatments are not working, RA can worsen and affect more joints. This can lead to more damage and disability.
At times, RA symptoms worsen in “flares” due to a trigger such as stress, environmental factors (such as cigarette smoke or viral infections), too much activity, or suddenly stopping medications. In some cases, there may be no clear cause.
The goal of treatment is to control the disease so it is in remission or near remission, with no signs or symptoms of the disease.
Rheumatoid arthritis can cause other medical problems, such as:
Rheumatoid nodules that are firm lumps just below the skin, typically on the hands and elbows.
Anemia due to low red blood cell counts.
Neck pain.
Dry eyes and mouth.
Inflammation of the blood vessels, the lung tissue, airways, the lining of the lungs, or the sac enclosing the heart.
Lung disease, characterized by scarring and inflammation of the lungs that can be severe in some people with RA.
Causes of Rheumatoid Arthritis
Researchers do not know what causes the immune system to turn against the body’s joints and other tissues. Studies show that a combination of the following factors may lead to the disease:
Genes. Certain genes that affect how the immune system works may lead to rheumatoid arthritis. However, some people who have these genes never develop the disease. This suggests that genes are not the only factor in the development of RA. In addition, more than one gene may determine who gets the disease and how severe it will become.
Environment. Researchers continue to study how environmental factors such as cigarette smoke may trigger rheumatoid arthritis in people who have specific genes that also increase their risk. In addition, some factors such as inhalants, bacteria, viruses, gum disease, and lung disease may play a role in the development of RA.
Sex hormones. Researchers think that sex hormones may play a role in the development of rheumatoid arthritis when genetic and environmental factors also are involved.
Source: Occupational inhalable agents constitute major risk factors for ... (https://ard.bmj.com/content/82/3/316)
Objectives To assess the effects of occupational inhalable exposures on rheumatoid arthritis (RA) development and their interactions with smoking and RA-risk genes, stratifying by presence of anticitrullinated protein antibodies (ACPA).
Methods Data came from the Swedish Epidemiological Investigation of RA, consisting of 4033 incident RA cases and 6485 matched controls. Occupational histories were retrieved, combining with a Swedish national job-exposure matrix, to estimate exposure to 32 inhalable agents. Genetic data were used to define Genetic Risk Score (GRS) or carrying any copy of human leucocyte antigen class II shared epitope (HLA-SE) alleles. Associations were identified with unconditional logistical regression models. Attributable proportion due to interaction was estimated to evaluate presence of interaction.
Results Exposure to any occupational inhalable agents was associated with increased risk for ACPA-positive RA (OR 1.25, 95% CI 1.12 to 1.38). The risk increased as number of exposed agents increased (Ptrend<0.001) or duration of exposure elongated (Ptrend<0.001). When jointly considering exposure to any occupational inhalable agents, smoking and high GRS, a markedly elevated risk for ACPA-positive RA was observed among the triple-exposed group compared with those not exposed to any (OR 18.22, 95% CI 11.77 to 28.19). Significant interactions were found between occupational inhalable agents and smoking/genetic factors (high GRS or HLA-SE) in ACPA-positive RA.
Conclusions Occupational inhalable agents could act as important environmental triggers in RA development and interact with smoking and RA-risk genes leading to excessive risk for ACPA-positive RA. Future studies are warranted to assess preventive strategies aimed at reducing occupational hazards and smoking, especially among those who are genetically vulnerable.
Arthritis, Rheumatoid
Anti-Citrullinated Protein Antibodies
Smoking
Epidemiology
Data availability statement
Data are available on reasonable request. All data and codes are available on request to the corresponding authors (xia.jiang@ki.se).
This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/.
WHAT IS ALREADY KNOWN ON THIS TOPIC
Cigarette smoking has been shown to increase the risk of developing rheumatoid arthritis (RA), but little is known about the effects of occupational inhalable agents on RA.
WHAT THIS STUDY ADDS
Our results suggest that exposure to occupational inhalable agents increases the risk of developing RA and interacts with smoking and RA-risk genes leading to an excessive risk for anticitrullinated protein antibodies-positive RA.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
Our study emphasises the importance of occupational respiratory protections, particularly for individuals who are genetically predisposed to RA.
Background
Rheumatoid arthritis (RA) is a chronic autoimmune joint disorder characterised by painful and disabling polyarthritis, commonly affecting 0.3%–1.0% of the global population.1 2 External exposures such as smoking have been recognised as important environmental risk factors for RA, while human leucocyte antigen class II shared epitope (HLA-SE) alleles constitute the major genetic risk factors.3–6 A striking 21-fold increased risk of developing anticitrullinated protein/peptide antibodies positive (ACPA-positive) RA has been reported for smokers carrying two copies of HLA-SE alleles, leading to the formulation of an aetiological hypothesis where autoimmunity to citrullinated autoantigens occurs after activation of HLA-SE restricted immunity to autoantigens generated in lungs by smoking.7–9 Notably, the observations forming the basis of this hypothesis are made only for the ACPA-positive subtype.7 10 11
Over recent years additional environmental inhalable exposures have been linked to risk for RA, including silica dust, asbestos and textile dust, whereas studies on effects of air pollution have yielded variable results.12 13 However, there is still a lack of knowledge about the impact on RA risk of the many different environmental exposures affecting airways that occur in occupational situations worldwide. Furthermore, the existing studies on inhalant–RA associations have rarely considered personal smoking habits or genetic backgrounds in the same context, and only in a few cases subdivided RA into serologically defined subsets.
Against this background, and given that occupational environmental airway exposures constitute potentially modifiable causes of RA, we set out to investigate the impact of such exposures on risk for the two major subtypes of RA, taking also smoking and genetic constitution into account. For this purpose, we used data from the large case–control study, Epidemiological Investigation of RA (EIRA), which enabled us to investigate the associations between multiple occupational inhalable exposures and risk of RA, as well as their interactions with smoking and genetic variants.
Methods
Study base
The EIRA is a Swedish population-based case–control study that comprises participants over the age of 18 in southern/central regions of Sweden. Cases were patients with newly onset RA diagnosed by a rheumatologist, based on the American College of Rheumatology (ACR) 1987 criteria or the more recently introduced ACR/European League Against Rheumatism (EULAR) 2010 criteria. Controls were randomly selected from the nationwide population register shortly after case identification and matched on age, sex and residential area. The year of first symptom onset was registered for cases and taken as the index year for matched controls. Information regarding demographics, work history and lifestyles was collected by self-administered questionnaires, and blood samples were collected for anticitrulline antibody testing and genotyping.
During the period of 1996–2017, 4251 cases and 6934 controls participated in EIRA. After excluding participants who missed information on occupational history or important covariates, 4033 cases and 6485 controls were available for questionnaire data; approximately 3400 cases and 2800 controls had concomitant genetic data. Exclusion criteria are shown in figure 1A.
Patient and public involvement
No patients or members of the public were directly involved in the design or conduct of this study.
Exposure
The participants were asked to provide information on job titles, start year and end year for up to 14 working periods. To determine exposures to inhalable agents across different occupations and working periods, we applied a job-exposure matrix (JEM) developed for working conditions in Sweden, which contained assessment of prevalence and concentration of 47 inhalable agents.14 Details regarding the quantification of exposures are shown in figure 1B. We classified the exposures into binary variables as ever exposed versus never exposed (to a particular agent) with zero as cut-off. The participants who were possibly exposed to any of the 47 inhalable agents were classified as exposed to any agents. To ensure statistical power, we only retained agents with >50 exposed individuals in our agent-specific analysis, resulting in 32 agents. Our reference group consisted of individuals who were not exposed to any agents (0/47 agents in the JEM).
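As an illustration of the classification rule described above, the following sketch derives ever/never exposure flags from a toy job-exposure matrix. The occupations, agents and prevalence values are invented for illustration and are not taken from the actual Swedish JEM:

```python
# Sketch of deriving ever/never exposure flags from a job-exposure matrix (JEM).
# The JEM maps occupation -> agent -> exposure prevalence; a participant counts
# as "ever exposed" to an agent if any held occupation has prevalence > 0
# (zero as cut-off). All names and values below are hypothetical.

JEM = {
    "welder":  {"welding fume": 0.9, "quartz dust": 0.2, "detergents": 0.0},
    "cleaner": {"welding fume": 0.0, "quartz dust": 0.0, "detergents": 0.8},
}

def ever_exposed(work_history, agent, jem=JEM):
    """True if any occupation in the work history carries a nonzero
    exposure prevalence for the given agent."""
    return any(jem.get(job, {}).get(agent, 0.0) > 0 for job in work_history)

def exposed_to_any(work_history, jem=JEM):
    """True if the participant was possibly exposed to any agent in the JEM."""
    agents = {agent for row in jem.values() for agent in row}
    return any(ever_exposed(work_history, agent, jem) for agent in agents)
```

A participant whose history contains only occupations with zero prevalence for every agent falls into the unexposed reference group.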
Covariates
Genetic risk score and HLA-SE alleles
We included participants of European ancestry for genetic analysis. To obtain an appropriate genetic metric for a European population and avoid sample overlap, we retrieved the genetic summary statistics for the European-ancestral subpopulations from the hitherto largest RA genome-wide association study (GWAS).15 We meta-analysed summary statistics of participating studies after excluding EIRA, which resulted in 13 264 RA cases and 42 879 controls. We then computed the Genetic Risk Score (GRS) for EIRA participants using LDpred2 software with weights from this RA meta-GWAS.16 Participants were classified into carrying high versus low genetic burden based on the median values of GRS among controls.
In addition to GRS, we complementarily incorporated and investigated the primary RA risk genes summarised as presence of HLA-SE alleles,17 18 with participants classified as carriers or non-carriers based on the presence of any copy of HLA-SE alleles.
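The GRS and its median split can be sketched as follows. This is a simplified illustration: the real GRS construction used LDpred2 with genome-wide weights, whereas the SNP weights and dosages below are toy values:

```python
# Toy sketch: a genetic risk score (GRS) as a weighted sum of risk-allele
# dosages, followed by a high/low split at the median GRS among controls.
# SNP names, weights and dosages are invented.

def grs(dosages, weights):
    """Weighted sum of allele dosages (each dosage in 0, 1, 2)."""
    return sum(weights[snp] * d for snp, d in dosages.items())

def high_grs_flags(scores, control_ids):
    """Classify every participant as high (True) / low (False) genetic
    burden, using the median GRS among controls as the cut-point."""
    ctrl = sorted(scores[i] for i in control_ids)
    n = len(ctrl)
    median = ctrl[n // 2] if n % 2 else (ctrl[n // 2 - 1] + ctrl[n // 2]) / 2
    return {i: s > median for i, s in scores.items()}
```

Note that the cut-point is computed among controls only, then applied to cases and controls alike.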
Other covariates
Participants who reported never having smoked were classified as non-smokers, while those who reported being current smokers, ex-smokers or non-regular smokers were classified as ever smokers. Alcohol consumption was categorised as non-drinker or ever drinker. Body mass index (BMI) (kg/m2) was categorised into <20, 20–25 and >25. Levels of education were classified into primary education, secondary education and university degree. Residential areas were categorised into 16 counties. Age in years was included as a continuous variable. Sex was binary (male/female).
Statistical analysis
We first compared the basic characteristics of each of the two RA subtypes (ACPA-positive and ACPA-negative) with the controls, using t-tests for continuous variables and χ2 tests for categorical variables. We then estimated the association of exposure to each inhalable agent with the risk of developing overall RA as well as with ACPA-based subtypes through unconditional logistic regressions with adjustment for matching factors.
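For intuition about what such an association estimate represents, the unadjusted (crude) OR can be computed from a 2×2 exposure-by-disease table; the sketch below uses a Woolf-type 95% CI and invented counts. The estimates reported in this paper come from adjusted logistic regression, not from this shortcut:

```python
import math

def crude_or(exp_case, exp_ctrl, unexp_case, unexp_ctrl):
    """Crude odds ratio with a Woolf 95% CI from a 2x2 table:
    OR = (exposed cases * unexposed controls) /
         (exposed controls * unexposed cases)."""
    or_ = (exp_case * unexp_ctrl) / (exp_ctrl * unexp_case)
    # Woolf standard error of log(OR)
    se = math.sqrt(1 / exp_case + 1 / exp_ctrl + 1 / unexp_case + 1 / unexp_ctrl)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)
```

Logistic regression generalises this crude estimate by letting the OR be adjusted for covariates such as age, sex and smoking.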
Occupational hazardous agents often coexist. To account for potential correlations among inhalable agents, we calculated Pearson’s correlation coefficients pairwise across all 32 agents and ‘clumped’ these agents with a significance threshold of 1.0×10−4 (0.05/496 pairs) and a correlation coefficient threshold of 0.4 (moderate correlation), through which a total of 16 independent collections of inhalable agents (therefore 16 index agents) were identified. These 16 index agents were used as main exposures in our subsequent analyses.
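The clumping step can be illustrated with a simple greedy pass over the agents. This is schematic only: it uses the study's thresholds of |r| > 0.4 and p < 1.0×10−4 (0.05/496), but the correlation table in the usage example is invented:

```python
# Greedy clumping sketch: keep an agent as a new independent "index agent"
# only if it is not strongly correlated (|r| > 0.4 with a significant
# p-value) with any already-chosen index agent.

R_THRESH = 0.4
P_THRESH = 0.05 / 496  # 496 = C(32, 2) pairwise tests

def clump(agents, corr, r_thresh=R_THRESH, p_thresh=P_THRESH):
    """corr maps frozenset({a, b}) -> (r, p); returns the index agents."""
    index_agents = []
    for a in agents:
        correlated = any(
            abs(r) > r_thresh and p < p_thresh
            for b in index_agents
            for r, p in [corr.get(frozenset({a, b}), (0.0, 1.0))]
        )
        if not correlated:
            index_agents.append(a)
    return index_agents
```

Agents absent from the correlation table are treated as uncorrelated (r = 0, p = 1), so they start their own collection.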
To understand the accumulated effect of inhalable agents, we classified participants into five groups based on their total numbers of exposures (exposed to 1, 2, 3, 4 or ≥5 of the 16 independent index agents) or quintiles of exposure duration (0–3.3, 3.3–8.0, 8.0–13.5, 13.5–24.0, 24.0–51.0 years of exposure to any agents). We evaluated an exposure–response relationship comparing each of the five exposed groups with the reference group (individuals not exposed to any agents).
To investigate the joint effect of inhalable agents, smoking and genetic predisposition (high GRS or carrying HLA-SE alleles), we categorised participants into seven groups based on their exposure status to any of the three factors (only exposed to one factor, only exposed to two factors or triple-exposed). We performed analysis comparing each of the seven exposed groups with the reference group (individuals not exposed to any occupational inhalable agents, non-smoker and with low GRS or non-HLA-SE-carriers). To explore the gene–environment (G×E) or E×E interaction effect among inhalable agents, smoking and genetic predisposition (high GRS or carrying HLA-SE alleles), we estimated the additive interaction defined as departure from the additivity of effects.19
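The additive-interaction measure used here, the attributable proportion due to interaction (AP), follows from the ORs of the singly and doubly exposed groups relative to the unexposed reference; a minimal sketch (the OR values in the test are illustrative, not study estimates):

```python
def attributable_proportion(or11, or10, or01):
    """Attributable proportion due to additive interaction:
    AP = RERI / OR11, where RERI = OR11 - OR10 - OR01 + 1.
    or11: OR for the doubly exposed group;
    or10, or01: ORs for exposure to one factor only;
    all relative to the doubly unexposed reference (OR = 1)."""
    reri = or11 - or10 - or01 + 1.0
    return reri / or11
```

An AP of 0 means the joint effect is exactly additive (no interaction); a positive AP means part of the risk in the doubly exposed group is attributable to the interaction itself.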
All analyses were conducted using unconditional regression models, with age, sex, residential area, BMI, smoking, drinking and levels of education commonly included as covariates. All analyses were performed in RA overall as well as stratified into ACPA-based subtypes. For analysis involving genetic data, principal components 1-10 were additionally included to control for population stratification. To account for multiple testing, statistical significance was set at a stringent P-threshold of 1.6×10−3 (0.05/32), while suggestive significance was set at 1.6×10−3<P<0.05. More details for our methods were described in online supplemental methods.
Results
Basic characteristics of RA cases and controls are shown in table 1. Compared with controls, both ACPA-positive and ACPA-negative cases were more likely to smoke, drink less, be overweight and be without a university degree. Compared with ACPA-negative cases, ACPA-positive cases were slightly younger, more likely to be women and smokers, had high GRS and more HLA-SE carriers. In terms of occupational inhalable agents, 73% of ACPA-positive cases and 72% of ACPA-negative cases were ever exposed, significantly higher than controls (67%).
When looking into each of the 32 inhalable agents (figure 2A and online supplemental table 1), we observed distinct association patterns in RA subtypes. The point estimates for all 32 agents in ACPA-positive RA were greater than the corresponding estimates in ACPA-negative RA. Specifically, 17 out of 32 agents were strongly associated with an increased risk of ACPA-positive RA (p<1.6×10−3, ORs ranging from 1.25 to 2.38); meanwhile, none of the agents withstood Bonferroni correction (p<1.6×10−3) for ACPA-negative RA. The strongest association for ACPA-negative RA (in terms of significance) was found for quartz dust (p=2.0×10−3), followed by asbestos (p=5.8×10−3) and detergents (p=6.8×10−3).
Multiple occupational hazards are likely to coexist in the same work environment, as reflected by our clustering plot (figure 2B). After clumping, 16 collections of agents that were mutually independent of each other were identified (online supplemental figure 1). The risk of developing RA increased as the number of exposed agents (out of the 16) increased (Ptrend<0.001 for overall RA and both subtypes) (figure 3A and online supplemental table 3) or as the duration of exposure (to any agents) lengthened (Ptrend<0.001 for overall and ACPA-positive RA) (figure 3B and online supplemental table 4).
Exposure–response relationship between occupational inhalable agents and RA. (A) Participants were classified into exposed to 1, 2, 3, 4 or ≥5 agents out of the 16 independent agent collections and compared with the non-exposed group (not exposed to any of the 47 agents). (B) Participants were classified into five subsets with exposure durations of 0–3.3, 3.3–8.0, 8.0–13.5, 13.5–24.0, 24.0–51.0 years (to any agents) and compared with the non-exposed group (not exposed to any of the 47 agents). Results are shown for overall RA, as well as ACPA-positive and ACPA-negative subtypes. Estimates were adjusted for age, sex, residential area, smoking, alcohol drinking, education and body mass index. ACPA, anticitrullinated protein antibodies; RA, rheumatoid arthritis.
When simultaneously analysing inhalable agents (exposed to any agent), smoking and high GRS, participants who were triple-exposed had a higher risk of developing RA overall compared with those who were not exposed to any of the three factors (OR 5.50, 95% CI 4.23 to 7.14, p=1.8×10−37). For ACPA-positive subtype, the estimated OR was 18.22 (95% CI 11.77 to 28.19, p=8.2×10−39), notably higher than an OR of 1.69 (95% CI 1.25 to 2.29, p=6.1×10−4) observed in ACPA-negative subtype (table 2).
The combined effects of occupational inhalable agents (exposed to any agents), smoking and high GRS with risk for RA
More importantly, across the 16 specific collections of agents, a large risk of developing RA in the triple-exposed group (number of individuals in this group >30 for robustness) was observed for ACPA-positive subtype with ORs ranging from 18.0 to 45.1, while the estimates for ACPA-negative subtype were weaker with ORs ranging from 0.85 to 2.64 (see results for four common collections in figure 4, for all 32 agents in online supplemental table 4). When replacing GRS with HLA-SE, a similar pattern was observed—a strikingly increased risk in the triple-exposed group was found exclusively in ACPA-positive subtype, with ORs ranging from 8.6 to 25.9 (online supplemental table 6).
Discussion
Our study supports a general link between occupational inhalable agents and risk of RA, with clear restriction towards ACPA-positive rather than ACPA-negative RA and with higher ORs for risk in men than in women. We also observed an exposure–response relationship, in which the risk of ACPA-positive RA increased either with an increased duration or with an increased number of exposed agents. When taking smoking and genes into account, an 18-fold higher risk of developing ACPA-positive RA was observed in the triple-exposed group (any agent, smoking and high GRS) compared with the non-exposed reference group. Furthermore, a positive G×E interaction between inhalable agents and genetic predisposition was observed in our study, additionally supporting a trigger role for the environmental exposures.
Inhalable exposures have long been proposed as important risk factors for RA, particularly for the seropositive subtype.3 8 20 However, so far relatively few studies have investigated the inhalant–RA relationship, most of which only covered a limited number of exposures, rarely controlled for important confounders and usually lacked statistical power to stratify cases by seropositivity.12 Briefly, studies have reported silica, asbestos, textile dust, organic solvents, oil mist and pesticides to significantly affect RA risk.21–27 Our results, which comprehensively interrogated 32 agents summarising information from 10 518 individuals, largely extend previous observations by providing novel dimensions. We successfully validated the hazardous role of quartz dust (silica), asbestos, toluene (organic solvents), oil mist and pesticides in RA, while additionally identifying several common inhalable agents not investigated before in relation to RA, including detergents, carbon monoxide, pulp or paper dust, gasoline engine exhaust and welding fume. However, the effect of textile dust, reported as a risk factor in a Malaysian case–control study involving 910 female RA cases and 910 age- and sex-matched controls,24 could not be replicated in our Swedish population. Indeed, most individuals assessed as exposed to textile dust in our data worked as painting teachers, tailors and packers, occupations in which both the intensity and the characteristics of textile exposure might differ from those of Malaysian textile workers.
Notably, we observed a sex difference: exposure to occupational inhalable agents affected male patients with RA more strongly than female patients. Indeed, men and women presented different exposure patterns.28 According to our data, men had a longer average duration of occupational exposure (11.6 vs 7.1 years, p<0.001) and were exposed to more agents considered hazardous. The top five exposures in men were detergents (percentage exposed: 33%), carbon monoxide (31%), stone and concrete (26%), iron (22%) and polycyclic aromatic hydrocarbons (17%), while the top exposures in women were detergents (51%), pulp or paper dust (5%) and carbon monoxide (4%). This underlying relationship was further confirmed in our exposure–response analysis, which showed that the risk of developing RA increased either with longer exposure duration or with an increased number of exposed agents. Taken together, our results corroborate each other, emphasising occupational inhalable exposures as an important risk factor for RA development, reflecting a general effect of inhalants on RA pathogenesis and highlighting the lung as an important site for triggering RA. Indeed, respiratory disorders, both acute and chronic, have been recognised as risk factors for RA, which might partially affect the link between occupational inhalable exposures and RA.29–32
Our study also reveals distinct inhalant–RA association patterns for ACPA-based subtypes, which extend previous epidemiological findings and further emphasise that the effect of inhalable exposures is possibly restricted to ACPA-positive RA.7 21 26 ACPA was found in the sputum of individuals who were seronegative but considered to be at risk for RA due to family history; in these individuals the ratio of autoantibody to total Ig was thus higher in sputum than in serum.33 Also in early ACPA-positive RA, levels of ACPA were higher in bronchoalveolar fluids than in serum.34 This evidence indicates a local production of RA-related autoantibodies, such as ACPA, in the lungs and provides a possible explanation for the specific linkage of provocation by inhalable exposures to ACPA-positive RA. Typically, ACPA-positive RA has a worse prognosis with higher rates of erosive damage and is usually linked with more genetic and environmental risk factors as compared with ACPA-negative RA, which is believed to constitute a heterogeneous group of RA with so far unknown aetiologies.10 11 35 36
Calculation of additive interaction has been described as the most appropriate approach to identify ‘sufficient cause interactions’ and to inform on disease mechanism.37 38 We observed a significant interaction between exposure to asbestos, carbon monoxide, gasoline engine exhaust, and quartz dust and genetic predisposition for the risk of ACPA-positive RA. Notably though, high GRS combined with these agents contributed to a higher point-estimated risk of developing RA in the double-exposed subgroups than if combined with HLA-SE alleles. A possible explanation might be that the GRS, constructed from genome-wide genetic markers, reflects a larger part of RA heritability than HLA-SE and thereby has better predictive ability. The higher-risk figures and interaction results for GRS as compared with HLA-SE may also indicate that GRS interacts with inhalable agents via molecular pathways not fully captured by investigations restricted to HLA-SE alleles.
Notably, detergents (OR 1.27, 95% CI 1.13 to 1.42) and pulp or paper dust (OR 1.57, 95% CI 1.26 to 1.96), despite their strong primary associations with ACPA-positive RA, yielded APs close to 0 for the interaction with either high GRS (detergents: AP 0.00, 95% CI −0.17 to 0.17; pulp or paper dust: AP 0.02, 95% CI −0.33 to 0.36) or HLA-SE alleles (detergents: AP 0.07, 95% CI −0.10 to 0.25; pulp or paper dust: AP 0.06, 95% CI −0.30 to 0.42). These differing gene–environment interaction patterns across inhaled agents imply that distinct pathogenetic pathways may be active after exposure to different inhaled occupational agents and that some of these pathways differ from those previously proposed for smoking.34 39
Despite the substantial advantages of our study, including its large sample size and multiple exposures, and its ability to account for personal smoking and genetic background as well as to adjust for important confounders, we must acknowledge several limitations. First, information on occupation was collected retrospectively, which might introduce recall bias. We, however, expect such bias to be modest given our population-based design, the recruitment of incident cases and the fact that occupational careers are important life events that are unlikely to be forgotten. Second, although the JEM is a validated tool widely used to estimate job-exposure status in large-scale studies, it might lead to non-differential misclassification; that is, participants who ever worked in any occupation assessed with a high probability of exposure (>50% in this study, not necessarily 100%) would be defined as exposed. Such non-differential misclassification is, however, expected to underestimate the associations, meaning that the true associations may be even stronger. Third, because certain kinds of inhalable agents often coexist, it is difficult to identify the independent relationship of any one agent with RA risk, given the limited number of participants exposed to only a single agent in this study. Finally, the sample size of ACPA-negative cases was relatively small, and further studies with larger samples are warranted to re-examine the potential associations between occupational inhalable agents and ACPA-negative RA.
To conclude, our study shows that inhalable, mainly occupational, exposures act as important environmental risk factors in RA development, especially in ACPA-positive RA. The markedly increased risk for RA after exposure to smoking and occupational inhalable agents observed among individuals carrying genetic variants common in Swedish as well as in most Caucasian populations strongly supports the implementation of broad preventive strategies such as smoking cessation and mitigation of occupational hazards. Notably, the major histocompatibility complex (MHC)–environment interactions seen here for a Caucasian population are likely to be present also in other, for example, Asian populations, as MHC class II–smoking interactions and a high risk for ACPA-positive RA have been described for Asia as well.40 41 Emphasis on the assessment and implementation of preventive strategies is thus warranted in industries worldwide, something that may in the future also involve awareness of genetic vulnerability, for example, through family history or testing for genetic variants predisposing for RA. Overall, our data provide a novel and quite dramatic emphasis on the role of occupational exposures in the aetiology of seropositive RA, calling for extended measures to reduce these exposures as part of international collaborative efforts to reduce morbidities due to working life.
Data availability statement
Data are available on reasonable request. All data and codes are available on request to the corresponding authors (xia.jiang@ki.se).
Ethics statements
Patient consent for publication
Not applicable.
Ethics approval
Ethical approval for EIRA was granted by the Karolinska Institutet Ethics Committee, the Regional Stockholm Ethics Committee, and the Central Swedish Ethics Authority (Etikprövningnämnden). Participants gave informed consent before taking part in the study.
. A new model for an etiology of rheumatoid arthritis: smoking may trigger HLA-DR (shared epitope)-restricted immune reactions to autoantigens modified by citrullination. Arthritis Rheum 2006;54:38–46. doi:10.1002/art.21575. PMID: 16385494.
. Five amino acids in three HLA proteins explain most of the association between MHC and seropositive rheumatoid arthritis. Nat Genet 2012;44:291–6. doi:10.1038/ng.1076. PMID: 22286218.
. Silica exposure among male current smokers is associated with a high risk of developing ACPA-positive rheumatoid arthritis. Ann Rheum Dis 2010;69:1072–6. doi:10.1136/ard.2009.114694. PMID: 19966090.
. Structural changes and antibody enrichment in the lungs are early features of anti-citrullinated protein antibody-positive rheumatoid arthritis. Arthritis Rheumatol 2014;66:31–9. doi:10.1002/art.38201. PMID: 24449573.
. Effects of smoking and shared epitope on the production of anti-citrullinated peptide antibody in a Japanese adult population. Arthritis Care Res 2014;66:1818–27. doi:10.1002/acr.22385. PMID: 24942650.
Footnotes
Correction notice This article has been corrected since it was first published. The open access licence has been updated to CC BY.
Contributors All authors have made substantial contributions to the conception and design of this study, analysis, interpretation of the results and drafting and revising the manuscript. XJ is the guarantor of the study.
Funding The study was supported by funding from the Swedish Research Foundation for Health, Working Life and Welfare, the Swedish Research Council, the AFA Foundation, Region Stockholm, King Gustaf V’s 80-year Foundation and the Swedish Rheumatic Foundation. We would like to thank the participants in this study as well as the clinicians and nurses in the EIRA study group.
Competing interests None declared.
Patient and public involvement No patients or members of the public were directly involved in the design or conduct of this study.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise. | All data and codes are available on request to the corresponding authors (xia.jiang@ki.se).
This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/.
WHAT IS ALREADY KNOWN ON THIS TOPIC
Cigarette smoking has been shown to increase the risk of developing rheumatoid arthritis (RA), but little is known about the effects of occupational inhalable agents on RA.
WHAT THIS STUDY ADDS
Our results suggest that exposure to occupational inhalable agents increases the risk of developing RA and interacts with smoking and RA risk genes, leading to an excessive risk for anti-citrullinated protein antibody (ACPA)-positive RA.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
Our study emphasises the importance of occupational respiratory protection, particularly for individuals who are genetically predisposed to RA.
Rheumatoid arthritis and smoking: putting the pieces together
Abstract
Beyond its established role in atherosclerosis and lung cancer, smoking is considered to play a major part in the pathogenesis of autoimmune diseases. It has long been known that there is a connection between rheumatoid factor-positive rheumatoid arthritis and cigarette smoking. Recently, an important gene–environment interaction has been revealed: carrying specific HLA-DRB1 alleles encoding the shared epitope, combined with smoking, establishes a significant risk for anti-citrullinated protein antibody-positive rheumatoid arthritis. We summarize how smoking-related alteration of the cytokine balance, the increased risk of infections (with the possibility of cross-reactivity) and modifications of autoantigens by citrullination may contribute to the development of rheumatoid arthritis.
Introduction
It has long been known that there is a connection between seropositive rheumatoid arthritis (RA) and smoking. The exact underlying mechanism, however, has only been speculated.
Cigarette smoking is one of the major environmental factors suggested to play a crucial role in the development of several diseases. Disorders affecting the great portion of the population, such as atherosclerosis, lung cancer or cardiovascular diseases, are highly associated with tobacco consumption. More recently, it has been reported that smoking is involved in the pathogenesis of certain autoimmune diseases such as RA, systemic lupus erythematosus, systemic sclerosis, multiple sclerosis and Crohn's disease.
The association between hospitalization due to RA and cigarette smoking was first described by Vessey and colleagues as an unexpected finding of a gynecological study [1]. Since then several population-wide case–control and cohort studies have been carried out [2]. For example, a population-based case–control study in Norfolk, England showed that ever smoking was associated with a higher risk of developing RA [3]. Only an early Dutch study from 1990 involving female RA patients (with control patients with soft-tissue rheumatism and osteoarthritis) reported that smoking had a protective effect in RA, although it investigated only recent smoking and its controls were not drawn from the general population [4]. Investigations have elucidated that many aspects of RA (rheumatoid factor (RF) positivity, severity, and so forth) can be linked to smoking. Recent data suggest that cigarette smoking establishes a higher risk for anti-citrullinated protein antibody (ACPA)-positive RA. In the present paper we attempt to give a thorough review of this field, concerning the main facts and hypotheses regarding the development of RA in connection with smoking.
Smoking and immunomodulation
Smoking in general
Smoking is considered to have a crucial role in the pathogenesis of many diseases and, as a significant part of the population smokes, it is one of the most investigated and well-established environmental factors. Cigarette smoke represents a mixture of 4,000 toxic substances including nicotine, carcinogens (polycyclic aromatic hydrocarbons), organic compounds (unsaturated aldehydes such as acrolein), solvents, gas substances (carbon monoxide) and free radicals [5]. Many data suggest that smoking has a modulator role in the immune system contributing to a shift from T-helper type 1 to T-helper type 2 immune response; pulmonary infections are increased, immune reactions against the invasion of microorganisms are depleted (see below), and (lung) tumor formation is augmented.
Exposure to cigarette smoke results in the depression of phagocytic and antibacterial functions of alveolar macrophages (AMs) (Table 1) [6, 7]. Although AMs from smokers are able to phagocytose intracellular bacteria, they are unable to kill the bacteria – which consequently implies the deficiency of these cells in smokers [8]. Cigarette smoke condensate, administered to mice, leads to a decrease in primary antibody response [9]. Chronic smoking results in T-cell anergy by impairing the antigen receptor-mediated signaling [10].
Smoking induces a decline in TNF production, which is supported by several data in the literature. In the work of Higashimoto and colleagues, in vivo exposure to tobacco smoke caused a significant decrease in the production of TNFα by AMs after lipopolysaccharide (LPS) stimulation. In vitro exposure of AMs to tobacco smoke extracts (water-soluble extracts) also caused a drop in the secretion of TNFα with stimulation of LPS [11].
With chronic smoking, AMs from rats significantly increase their generation of superoxide anion and release high amounts of TNFα after smoking sessions; when challenged with LPS, however, although cytokine secretion is more pronounced, it is not as marked as in the control groups [12]. It therefore seems that macrophages of experimental animals are activated, but at the same time are somehow depressed, responding less to LPS.
In line with the abovementioned observations, the capacity of AMs of healthy smokers to release TNFα, IL-1 and IL-6 is significantly decreased [13, 14].
Nicotine
Data on alterations of macrophage functions by nicotine (such as pinocytosis, endocytosis, microbial killing and reducing TNFα secretion induced by LPS) date back more than 40 years [6]. It is known that various kinds of immune cells carry nicotinic and muscarinic acetylcholine receptors (T cells and B cells), through which the nervous system and also the immune system itself can modulate and coordinate the proliferation, differentiation and maturation of immune cells [10]. It is suggested that the major portion of acetylcholine in the circulating blood originates from T-cell lines. The thymic epithelium as well as T cells in the thymus express nicotinic acetylcholine receptor, as do mature lymphocytes [10]. Chronic smoking leads to T-cell anergy, while its acute effects are primarily mediated via the activation of the hypothalamic–pituitary–adrenal axis [10, 15]. The nicotinic acetylcholine receptor is involved in the suppression of antimicrobial activity and cytokine responses (downregulation of IL-6, IL-12, and TNFα, but not that of the anti-inflammatory cytokine IL-10) of AMs [16].
In recent work of Borovikova and colleagues, acetylcholine significantly attenuated the release of cytokines (TNF, IL-1 and IL-6, but not anti-inflammatory IL-10) in LPS-induced human macrophage cultures [17]. Particularly the α7 subunit, mediated by the inhibition of NF-κB, has a role in the alteration of cytokine responses [18]. Nicotine also affects the quality of antigen presentation: in mature dendritic cells, nicotine exposure decreases the production of proinflammatory T-helper type 1 IL-12, and decreases the capacity of dendritic cells to induce antigen-presenting cell-dependent T-cell responses. Other reports contradict this, however, suggesting that the effect of nicotine on mature dendritic cells is proinflammatory in nature. Moreover, nicotine alters various neutrophil functions; for example, attenuates super-oxide anion production [10].
All of these data suggest an immunosuppressive effect of nicotine on the immune system, inhibiting various functions of almost all immune cell types.
Other organic compounds
Hydroquinone is found in high concentrations in cigarette smoke, causing prominent suppression in the production of IL-1, IFNγ and TNFα in human peripheral blood macrophages [19]. Hydroquinone seems to also significantly inhibit IFNγ secretion in lymphocytes in a dose-dependent manner. In addition, hydroquinone treatment results in the reduction of IFNγ secretion in effector CD4+ T cells and T-helper type 1-differentiated CD4+ T cells. These findings provide evidence that hydroquinone may suppress immune responses and contribute to the increased incidence of microbial infections caused by cigarette smoking [20].
Besides hydroquinone, other organic compounds are also present in cigarette smoke. Certain data suggest that unsaturated aldehydes such as acrolein and crotonaldehyde, contained in the aqueous phase of cigarette smoke extract, can evoke the release of neutrophil chemoattractant IL-8 and TNFα in human macrophages [21], which can be inhibited by N-acetyl-cysteine or glutathione monoethyl ester. Endogenous unsaturated aldehydes are found in high amounts in chronic obstructive pulmonary disease patients and are involved in the promotion of inflammation, so the exogenous analogues in smoke may have similar effects impeded by glutathione derivates.
Oxidative stress
Chronic smoking, as a repetitive trigger, causes marked oxidative stress in the body [5], which might be responsible for a constant inflammatory process. High amounts of exogenous free radicals contained in smoke can react with endogenous nitrogen monoxide, producing harmful peroxynitrite and decreasing the protective effect of nitrogen monoxide. Smoke also induces the production of endogenous free radicals; for example, reactive oxygen species (peroxide, superoxide, hydroxyl ion). Oxidative free radicals can cause a wide variety of cell damage via lipid peroxidation as well as via the oxidation of DNA and proteins, resulting in apoptosis. Several enzymes (for example, α1-protease inhibitor) containing redox-sensitive amino acids (cysteine or methionine) in their catalytic site can lose their activity or can undergo conformational changes. This may cause a higher susceptibility to degradation or may challenge the equilibrium of proteases/protease inhibitors.
The oxidant/antioxidant imbalance may activate redox-sensitive transcription factors such as NF-κB and activator protein-1, which regulate the genes of proinflammatory mediators (IFNγ) and protective antioxidants [22]. Normally, TNF can lead alternatively to activation of NF-κB or to apoptosis, depending on the metabolic state of the cell. Nicotine, as mentioned above, reduces TNF release of AMs and consequently promotes less NF-κB activation through TNF; however, the increased oxidative stress would permit and contribute to NF-κB activation.
In accordance with this observation, mild exposure to cigarette smoke can induce NF-κB activation in lymphocytes through the increase in oxidative stress and the reduction in the intracellular glutathione levels [23]. Vapor-phase cigarette smoke can increase the detachment of alveolar epithelial cells and decrease their proliferation. Furthermore, these cells show a higher susceptibility for smoke-induced cell lysis. Reduced glutathione seems to protect against the effects of cigarette smoke exposure, and the depletion of intracellular glutathione, produced by smoke condensates, enhances cell injury [24]. It is intriguing that there is a strong association between RA, smoking and the GSTM1 (the enzyme involved in glutathione production) null genotype [25]. The polymorphisms of receptor activator of NF-κB (see below) have also been linked to RA [26], which indicates that free radicals in smoke may contribute to the pathological chain of RA development.
It is noteworthy that peptidylarginine deiminase (PAD) enzyme isoforms contain a highly conserved cysteine in their active site, which plays a crucial role in the catalysis process. It has been shown that agents acting on cysteine sulfhydryl groups by binding them covalently can inactivate the enzyme, while reducing compounds can enhance its activity [28]. Free radicals in smoke produce an oxidative milieu, which may promote the formation of disulfide groups in the active site of the enzyme and may thus also have a disadvantageous impact on PAD. On the contrary, PAD expression and activity are increased in the lungs of smokers [29]; the explanation for this might be that PAD is originally located intracellularly, and citrullinated proteins may be released into the extracellular matrix after apoptosis.
Anti-estrogenic effect
Another striking phenomenon is the estrogen–smoke interaction in regulating PAD genes. PAD2 expression is increased in bronchoalveolar lavage of smokers, compared with nonsmokers [29]. The expression of PAD2 and PAD4 is also elevated in the synovium of RA patients. The expression of PAD (type 4) enzymes is dependent on estrogens [27]. Smoking, however, has an anti-estrogenic effect through the formation of inactive 2-hydroxy catechol estrogens [30], which would counteract PADs.
These statements suggest that the anti-estrogenic effect of smoking may not have as much importance as its other pleiotropic roles (immunomodulation, activation of redox-sensitive factors, and so forth) in the contribution to the development of ACPA + RA considering the estrogen dependence of the PAD enzyme.
Elevation of serum fibrinogen
Fibrinogen is mainly involved in blood coagulation and inflammation. The Framingham Study has revealed that smokers have higher levels of serum fibrinogen [31]. The citrullinated form of fibrin can be found in RA synovial tissue co-localizing with citrullinated autoantibodies [32]. It has been reported that the polymerization of citrullinated fibrinogen catalyzed by thrombin is impaired, suggesting that the function and antigenicity of citrullinated proteins are somewhat altered, which may potentially contribute to proinflammatory responses and autoimmune reactions in the joints [33].
Smoking and aspects of RA
Genetics
Genetics of RA
RA is considered to have a complex etiology: both genetic and environmental factors contribute to the disease development [26, 34, 35]. The genetic component of RA is widely investigated [36]: the strongest gene association is considered to be the one with the human leukocyte antigen (HLA) region, particularly the HLA-DRB1 genes accounting for about two-thirds of the genetics of RA. Certain HLA-DRB1 alleles (DRB1*0401, DRB1*0404, DRB1*0405, DRB1*0408, DRB1*0101, DRB1*102, DRB1*1001 and DRB1*1402), encoding the so-called shared epitope (SE) at amino acid positions 70 to 74 in the third hypervariable region of the DRB1 molecule, are associated with a higher susceptibility for RA [26].
Another significant association of RA is with the polymorphism of the protein tyrosine phosphatase nonreceptor 22 (PTPN22) gene. PTPN22 is an intracellular protein expressed in hematopoietic cells; it sets the threshold of T-cell receptor signaling [37]. PTPN22 is therefore likely to be a general risk factor for the development of autoimmunity. Certain functional variants (for example, R620W, 1858 C/T) of PTPN22 have been shown to confer a moderate risk for seropositive RA [38]. In addition, a significant interaction between PTPN22 and smoking (>10 pack-years) has been observed in a case–control study [39]. Other studies, however, have failed to confirm this observation.
Association studies implicate the role of several other genes, including TNF receptor 2 (TNFR2), solute carrier family 22, member 4 (SLC22A4), runt-related transcription factor 1 (RUNX1) and the receptor activator gene of NF-κB (TNFRSR11A) [26]. Furthermore, PADI4 polymorphisms have been found to confer a risk for RA only in Japanese and Korean populations, but not European populations [40].
RA therefore can be divided into two subsets of disease entities (ACPA-positive RA and ACPA-negative RA), which are likely to be genetically distinct: HLA-DRB1 SE alleles and PTPN22 are restricted to ACPA-positive RA, while genes such as interferon regulatory factor 5 (IRF-5) and C-type lectin seem to confer risk for ACPA-negative RA [26].
Genetics of smoking
Smoking as a chronic habit is genetically determined to some extent. The major candidate genes associated with smoking are those of cytochrome P450 enzymes, which play a substantial role in nicotine metabolism, and also those of dopamine receptors influenced by nicotine in the mesocorticolimbic dopaminergic reward pathways of the brain. A significant linkage was found between the ever–never smoking trait and chromosome 6 [41], which is associated with the HLA genes. A Hungarian group determined the polymorphisms of the MHC class III genes in coronary artery disease patients versus healthy individuals with defined smoking habits [41]. A significant association between ever smoking (past and current smokers) and a specific MHC haplotype (the TNF2 allele of the promoter of TNFα) was observed. More attempts have been made to find a correlation between TNF promoter polymorphisms and RA, although most of them failed [42]. These results suggest that genes (MHC classes) determining different aspects of smoking behavior do not seem to predispose for RA; that is, the habit and the disease are unlikely to share a common genetic root.
Smoking is a risk factor for RA in shared epitope carriers
According to a Swedish population-based case–control study, there is a gene–environment interaction between smoking and the HLA-DRB1 SE genotype [43]. The relative risk of RA was extremely high in smokers carrying single SE alleles (7.5) or double SE alleles (15.7). Nevertheless, neither smoking nor SE alleles, nor the combination of these factors, have increased the risk of developing seronegative RA [43]. The case–control study of the Iowa Women's Health Study involving postmenopausal women has indicated a strong positive association of smoking, SE positivity and GSTM1 null genotype with RA [25].
Smoking, seropositivity and disease activity
Smoking and seropositivity
A Finnish population screening showed an association between RF and smoking, but did not investigate RA [44]. In another study, a positive correlation was observed between smoking and RF levels; in particular, IgA RF was found to account for more severe disease [45]. Smoking confers risk only for the seropositive form of RA [46], suggesting that the two disease entities may have different pathomechanisms.
Certain studies support the idea that there is an association between smoking and RA only in men, but not in women [47], yet many other reports contradict this suggestion [48]. A case–control study from Sweden found that smokers of both sexes have an increased risk of developing seropositive RA but not seronegative RA [49].
Smoking intensity and RA
Many attempts have been made to clarify how smoking history (duration of smoking in years or the intensity of smoking per day) influences the development of RA.
A population-based case–control study of RA in the United States showed that women with 20 pack-years or more of smoking (number of pack-years = (number of cigarettes smoked per day × number of years smoked)/20) had an increased relative risk for RA compared with never-smokers [48]. Similarly, a study of female health professionals showed that women smoking ≥ 25 cigarettes/day for more than 20 years (>25 pack-years) experienced an increased risk of RA [50]. A strong association has been found between RA and heavy cigarette smoking (history of 41 to 50 pack-years), but not with smoking itself [51]. The smoking intensity (number of cigarettes/day), however, was not associated with RA after adjusting for duration of smoking, which suggests that it is the duration of smoking and not the intensity that confers risk for RA. Yet, in a prospective female cohort in Iowa, both factors of smoking were found to be associated with RA, and the associations were observed only in current smokers and in those ever-smokers who quit 10 years or less prior to the study [52]. Similarly, in the prospective Nurses Health Study both smoking intensity and duration were directly related to the risk of RA, with a prolonged increase in risk after smoking cessation [53]. A case–control study from Sweden reported that the increased risk for RA is established after a long duration of smoking (≥ 20 years; the intensity was moderate) and might be sustained for several years (10 to 20 years) after smoking cessation [49].
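The pack-years measure used throughout these studies is simple arithmetic: cigarettes smoked per day multiplied by years smoked, divided by the 20 cigarettes in a standard pack. A minimal sketch:

```python
def pack_years(cigarettes_per_day, years_smoked):
    """Pack-years = (cigarettes per day x years smoked) / 20,
    with 20 cigarettes to a pack."""
    return cigarettes_per_day * years_smoked / 20.0

# e.g. the >=25 cigarettes/day for 20 years threshold cited above:
print(pack_years(25, 20))  # 25.0 pack-years
```

Note that very different smoking histories can yield the same pack-years value (for example, 10 cigarettes/day for 40 years also gives 20 pack-years), which is one reason the studies above try to separate duration from intensity.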
To summarize, it seems that both smoking duration and intensity may be associated with the development of RA. The duration might be more decisive (≥ 20 years), however, and at least 10 years of smoking cessation is needed to reduce the RA risk.
RA is characterized by antibodies including RF and ACPA. These data may indicate that a long duration of smoking with appropriate intensity may cause permanent immunomodulation and subsequent antibody production of memory cells, resulting in a steady state of pathological antibodies. After an unspecified time (about 10 years) of smoking cessation, these cells may disappear from the body.
Smoking and disease severity
Clinical evaluations of patients at the University of Iowa revealed that cigarette smoking (especially ≥ 25 pack-years) was significantly associated with RF positivity, radiographic erosions and nodules [54]. In another study there was a correlation between heavy smoking (≥ 20 pack-years) and rheumatoid nodules, a higher Health Assessment Questionnaire score, a lower grip strength and more radiological joint damage, suggesting an adverse effect of smoking on progression, quality of life and functional disability [55]. Some reports indicate that smoking can increase extra-articular manifestations (rheumatoid nodules, interstitial pulmonary disease, rheumatoid vasculitis) [56–58].
In the work of Manfredsdottir and colleagues, a gradual increase in disease activity was observed from never, former and current smokers defined by the number of swollen and tender joints and the visual analogue scale for pain, but smoking status did not influence the radiological progression [59]. In a cohort of Greek patients with early RA, cigarette smoking was associated with increased disease activity and severity in spite of the early treatment [60]. Only one study found reduced radiographic progression and generally more favorable functional scores among heavy smokers [61]. The recent results of Westhoff and colleagues have revealed that smoking does not influence the Disease Activity Score or radiographic scores, yet smokers need higher doses of disease-modifying antirheumatic drugs, which may indicate reduced potency of these drugs due to smoking or higher disease activity that can be controlled by only high doses of drugs [62].
One can conclude that smoking influences the course of RA negatively, although the extent of this effect differs across studies. It is therefore essential to draw patients' attention to the expected beneficial effect of smoking cessation.
Smoking and anti-cyclic citrullinated proteins
Recent data have revealed that smoking is highly associated with ACPA-positive RA (Table 2). The evaluation of incident cases of arthritis (undifferentiated arthritis and RA) has revealed that tobacco exposure increases the risk of anti-cyclic citrullinated protein (anti-CCP) antibodies (see information about anti-CCPs below) only in SE-positive patients [63]. In a national case–control study, tobacco smoking was related to an increased risk of anti-CCP-positive RA [64]. The investigation of consecutive sera of RA patients in a rheumatology clinic has shown that anti-CCP titers were associated with tobacco exposure [65].
Table 2 Population studies of RA investigating the association of smoking and anti-CCPs
In a case–control study involving patients with early-onset RA, Klareskog and colleagues found that previous smoking is dose-dependently associated with occurrence of anti-CCPs in RA patients. A major gene–environment interaction was also observed between smoking and HLA-DR SE genes: the presence of double copies of SE alleles confers about 20-fold risk for anti-CCP-positive RA in smokers [66]. A nationwide case–control study involving known and recently diagnosed RA patients conducted in Denmark has also proved strong gene–environment effects: there was an increased risk for anti-CCP-positive RA in heavy smokers with homozygote SE alleles [67].
In the study of the Leiden Early Arthritis Clinic, the HLA-DRB1*0401, HLA-DRB1*0404, HLA-DRB1*0405 or HLA-DRB1*0408 SE alleles conferred the highest risk of developing anti-CCP antibodies, and the smoking-SE interaction was highest in cases of HLA-DRB1*0101 or HLA-DRB1*0102 and HLA-DRB1*1001 SE alleles [68]. The same clinic has confirmed that anti-CCP-positive RA patients, who are current or former tobacco smokers, show a more extensive anti-CCP isotype usage compared with nonsmoker anti-CCP-positive patients; these observations were also valid for SE-negative RA patients [69]. In a French population of RA patients (one-half of them were multicase families), the presence of at least one SE allele (especially the DRB1*0401 allele) was related to the presence of anti-CCP antibodies [70]; smoking was associated with anti-CCP antibodies only in the presence of SE, and the cumulative dose of cigarette smoking was linked to the anti-CCP antibody titers.
A case-only analysis of three North American RA cohorts – RA patients from the North American Rheumatoid Arthritis Consortium (NARAC) family collection, from the National Inception Cohort of Rheumatoid Arthritis Patients, and from the Study of New Onset Rheumatoid Arthritis (SONORA) – has shown an association between smoking and anti-CCP in the NARAC and the National Inception Cohort, but not in the SONORA [71]. The SE alleles correlated with anti-CCP in all cohorts. Only the analysis of the NARAC cohort provided some evidence, however, for gene–environment interaction between smoking and SE alleles in anti-CCP-positive RA. In a study of African Americans with recent onset of RA, there was no association between smoking, anti-CCP antibody, IgM-RF or radiographic erosions [72]. A recent report comparing three large case–control studies – the Swedish Epidemiological Investigation of Rheumatoid Arthritis study, the NARAC study, and the Dutch Leiden Early Arthritis Clinic study – has reinforced the previous results [73]; namely, the association of smoking, HLA-DRB1 SE alleles and anti-CCP-positive RA. No interaction was found between PTPN22 R620W and smoking, however, indicating that smoking may have disadvantageous effects only in genetically susceptible individuals (for example, those carrying SE genes).
To conclude, these data suggest an association between smoking, SE alleles and ACPA-positive RA. Further environmental and genetic factors should also be considered, however, because the studies involving North American populations show a more complex picture of RA risk factors.
Anti-citrullinated protein antibodies and citrullination
RA sera were shown long ago to react specifically with filaggrin (a keratin-associated protein), which has since been proven to be a citrullinated protein; however, light has been shed on the importance of citrullinated proteins only in recent years. Commercial kits are nowadays available to detect ACPAs: these antibodies react with synthetic CCPs – hence the name anti-CCPs. ACPAs are markedly specific for RA – only a small percentage of the general population carries them [74]. Antibodies against citrullinated proteins – such as anti-filaggrin and antibodies to citrullinated vimentin, fibrinogen, type II collagen and alpha-enolase – usually arise several years prior to disease onset [74].
Citrullination is catalyzed by PADs and depends on a high calcium concentration. Five PAD isoforms (PAD1, PAD2, PAD3, PAD4 (identical to the isoform formerly termed PAD5) and PAD6) are currently distinguished. Through deimination (arginine → citrulline), proteins lose specific positive charges and can change conformation, becoming more susceptible to degradation [75]. Physiologically, citrullination takes place in the epidermis and the central nervous system. Pathologically, increased citrullination has been observed in the lining and sublining of joints and also in extraarticular regions in RA [74]. Citrullination is not specific for RA, however – other rheumatologic conditions with synovitis, including inflammatory osteoarthritis, reactive arthritis, undifferentiated arthritis, gout and even trauma, show the presence of citrullinated proteins [76]. The highly specific ACPAs are therefore the result of factors beyond local inflammation alone, involving genetic and environmental factors. Only the PAD2 and PAD4 isotypes are expressed in the synovium of RA patients (and also in other arthritides) [77]. Their sources are probably inflammatory cells; for example, dying human macrophages and lymphocytes produce citrullinated vimentin, which, if released into the extracellular matrix of the RA synovium, can specifically react with sera of RA patients.
Anti-CCPs are highly specific for RA, but they are found in 5 to 13% of patients with psoriatic arthritis [78], and a minority of patients with primary Sjögren syndrome also have an elevated anti-CCP titer, which is linked to the presence of synovitis [79]. Whether smoking confers a risk for the development of anti-CCPs in otherwise healthy individuals has not been investigated, but increased protein citrullination can be seen in the bronchoalveolar lavage of healthy smokers [29].
Smoking is associated with several autoimmune diseases such as systemic lupus erythematosus, primary biliary cirrhosis and multiple sclerosis, where similar gene–environment interactions may exist; the knowledge gained from research into these diseases could also help in the understanding of RA. For example, Moscarello and colleagues have proposed that citrullinated myelin basic protein may have a crucial role in the pathogenesis of multiple sclerosis [80]: as in RA, due to citrullination, myelin basic protein may become more susceptible to degradation by metalloproteases. In primary biliary cirrhosis, celiac disease and systemic lupus erythematosus, antibodies against self-enzymes involved in protein modification (deamidation, carboxylation, glycosylation) also exist, like the anti-PAD antibodies in RA (see later).
The connection of smoking, lung cancer, TNF and RA
It is well known that smoking has a pivotal role in the development of lung cancer. Smoke contains several carcinogens, leading to severe DNA damage via adduct formation and subsequently altered gene function. Contact-mediated cytostasis of tumor cells is also decreased by the AMs of smokers [13]. As mentioned in a previous section, components of smoke have significant immunomodulatory effects (they alter the functions of T cells and B cells, macrophages, dendritic cells and neutrophils) at several points, including reduction of TNFα production. Apart from the direct cytotoxic effects of TNFα against tumors, its antitumor activities may involve activation of different neutrophil functions, alteration of endothelial cell functions and increased production of IL-1. As a consequence, inhibition of TNFα (an antitumor agent) by smoke components may contribute to (lung) cancer formation, alongside the crucial effects of the direct carcinogens found in smoke.
In the pathogenesis of RA, TNFα plays a key role in joint and bone damage. Increased levels of TNFα can be measured at the sites of inflammation. Moreover, transgenic mice expressing high levels of TNFα develop RA-like arthritis. In an animal model of collagen-induced arthritis, inhibition of TNFα led to amelioration of the disease course. Later, extensive multicenter studies proved the beneficial effect of TNF blockade in RA [81], and nowadays TNF antagonists are widely used. In RA patients who smoke, an elevated ratio of TNFα/soluble TNF receptor released from activated T cells can be seen, which may contribute to the increased TNFα activity observed in RA. The ratio is related to the extent of smoking and is sustained even after smoking cessation, explaining why smoking intensity and duration have an impact on the development and course of RA [82].
Considering the TNFα-lowering effect of cigarette smoking, one would expect smoking to have beneficial effects on RA, even though the opposite is probably true. Pleiotropic effects of smoke components (oxidative stress, infections, citrullination), rather than the TNF antagonism evoked by nicotine alone, may therefore be the main susceptibility factors for disease development. Supporting this hypothesis, in both types of inflammatory bowel disease (ulcerative colitis and Crohn's disease), in which smoking plays opposite roles, TNF antagonists are beneficial and are crucial components of the therapeutic repertoire.
Nowadays, three TNF antagonists exist – etanercept, a soluble receptor fusion protein; infliximab, a chimeric monoclonal antibody; and adalimumab, a fully human monoclonal antibody – and two other TNF antagonists (certolizumab and golimumab) are in clinical development. There are concerns about using biological agents, however, as the incidence of malignancies, especially lymphomas, may be increased compared with the normal population.
The increased proliferative drive of immune cells resulting in autoantibody formation and disease severity, rather than TNF antagonism or disease-modifying antirheumatic drugs (methotrexate), seems to be responsible for the elevated lymphoma risk, which is supported by the recent analysis of the Swedish Biologics Register [83]. In contrast, a previous meta-analysis of randomized trials of anti-TNF therapy revealed a dose-dependent increased risk of malignancies in RA patients treated with anti-TNF antibodies [84]. In conclusion, patients treated with TNF antagonists should be closely followed regarding malignancies.
RA, infection and citrullination
Data suggest that smoking has immunosuppressive effects through the various substances contained in cigarette smoke, among which nicotine plays the most substantial role (Figure 1). Nicotine can enter the bloodstream through the alveolar-endothelial barrier and may then reach different parts of the body, including lymphoid tissues, where it may exert systemic immunomodulatory effects and may act through nicotinic receptors of the autonomic nervous system. Owing to the immunosuppression evoked by smoke, infections are increased not only in the respiratory tract but also in other regions of the body.
Superantigens of specific bacteria (Streptococcus, Staphylococcus) and viruses (Epstein–Barr virus (EBV)) bypass processing by antigen-presenting cells by directly cross-linking MHC class II molecules and T-cell receptors outside the conventional antigen-specific variable regions, initiating massive T-cell activation (of up to 20% of all T cells). In addition, they may utilize not only T-cell receptor pathways but also other pathways [85]. A wide repertoire of T cells may be activated by superantigens, including cells reactive to citrullinated proteins or autodeiminated PADs (see below) in the respiratory tract. In line with this knowledge, specific bacteria and viruses have been incriminated in the pathogenesis of RA – one of which is EBV. Pratesi and colleagues found that sera from RA patients can react with citrullinated EBV nuclear antigen [86], which suggests previous EBV infection (superantigen) and also the presence of parallel citrullination – which might be induced by chronic smoking, as the amount of citrullinated proteins is increased in the bronchoalveolar lavage of smokers. The role of EBV in RA pathogenesis is supported by several other data: the anti-EBV titer is elevated in RA patients; certain EBV antigens share similarities with synovial self-autoantigens, providing the possibility of viral cross-reactivity; the gp110 glycoprotein of EBV contains a copy of the SE; cell-mediated responses against EBV proteins have been found in the synovial fluid of RA patients; and EBNA-1 can undergo citrullination, and the virus can induce antibody formation against citrullinated proteins [87].
Another explanation for the primary steps towards RA might be bacterial/viral cross-reactivity with autoantigens, as in the case of EBV. On the one hand, Porphyromonas gingivalis, a cause of periodontitis, has a functional PAD enzyme that is quite similar to the human PADs, and the infection may consequently stimulate antibody production against the human PADs as well [88]. The incidence of periodontitis is elevated by smoking [89], so smokers may be exposed to a greater burden of P. gingivalis, causing a constant antigenic trigger compared with nonsmokers. Autoantibodies against the PAD4 enzyme are specific markers of RA, exist in about 40% of RA patients, and are associated with a more severe disease course. Polymorphisms in the PADI4 gene (only in certain populations) may influence the immune response to the PAD4 enzyme, potentially contributing to disease propagation [90]. It has also been reported that PADs can autodeiminate, which may profoundly change the structure of the molecule so that new epitopes arise. Furthermore, the modified citrullinated proteins and PAD may create an altered molecular complex, like tissue transglutaminase and deamidated gluten in celiac disease, which may result in an autoimmune reaction in genetically prone subjects.
On the other hand, emerging data suggest there might be a connection between RA and Proteus mirabilis, supported by the following observations. There is an increased incidence of urinary tract infections (especially with P. mirabilis) in RA patients [91]. Furthermore, Ebringer and Rashid have found sequence homology between certain HLA alleles associated with RA and hemolysins of P. mirabilis. They also identified another homology between type XI collagen and the Proteus urease enzyme, although they failed to show common motifs between the urease and RA-targeted synovial structures even though active RA patients have elevated IgG and IgM antibodies against Proteus [91]. Consequently, infections might give rise to cross-reactivity against autostructures of the joints.
Similarly, CD19+ B cells capable of secreting antibodies reactive to type II collagen are present in both RA patients and in healthy subjects. In RA patients, however, the cells accumulate in the inflamed joints, suggesting that they have been activated due to certain factors (possibly superantigens or cross-reactivity) [92].
It is known that synovitis in general, also in nonautoimmune rheumatic diseases, is marked by citrullinated proteins, although the presence of ACPAs is specific for RA, and is likely to be the result of many tolerance-breaking immune steps. Neeli and colleagues have found that LPS-induced neutrophils can produce marked citrullination of histones, which then can be identified as the components of extracellular chromatin traps [93]. Bacterial invasion can provide the perfect background for neutrophil activation and subsequent release of highly autoantigenic citrullinated histones in the respiratory tract.
Besides common environmental factors, intrapersonal and interpersonal psychological factors may also contribute to RA pathogenesis. Supporting this hypothesis, RA patients with an elevated daily stress level (daily hassles, interpersonal conflicts) have poorer outcomes and more erosions, while major stress (major negative life events) might ameliorate the disease course. Long-lasting (chronic) minor stress may lead to proinflammatory responses via short-lived surges of hormones and neurotransmitters, whereas major stress might lead to a massive, long-lived release of stress-axis mediators of the hypothalamic–pituitary–adrenal axis (norepinephrine, cortisol, and so forth), resulting in anti-inflammatory responses [94]. Smoking might sustain a constant minor stress in the body via its addictive nature, and subsequently may lead to neurohumoral immunomodulation.
To summarize, when a particular constellation of genetic factors (for example, HLA-DRB1, which has a higher affinity for binding the citrullinated forms of proteins [95], and perhaps other loci in different populations such as the North American population) and environmental factors (smoking, concomitant infections (cross-reactivity, molecular mimicry) and general stressors, including psychological ones) is created, there is a possibility for autoimmune disease development.
As citrullination is considered one of the crucial steps in the development of RA, and as ACPAs seem to be involved in the progression of RA, new pharmaceutical agents targeting PADs have been investigated: PAD inhibitors include F-amidine (the most potent known inhibitor), paclitaxel and 2-chloroacetamidine [40]. Their clinical utility is somewhat controversial, however, as ACPAs can appear several years prior to the development of RA, and by the time healthcare professionals are able to interfere with the pathological processes of their patients, the vicious circle of the autoimmune process has already started and may be sustained by factors other than citrullination. Moreover, we know little about the physiological functions of PADs, so their inhibition may cause serious cellular disturbances, such as apoptosis [96].
Conclusion and future directions
The connection between smoking, anti-citrullinated protein antibodies and RA has been unambiguously proven by several studies and reports. Consequently, it is essential to inform patients about the hazardous role of smoking in the development and progression of RA. Moreover, as autoimmune diseases in general cause accelerated atherosclerosis due to constant inflammation and increase cardiovascular risk, it is important for patients to understand that smoking cessation is as necessary as taking disease-modifying antirheumatic drugs or biologics to achieve remission and a better quality of life.
Although we have an effective therapeutic repertoire for RA, we cannot reverse developed joint deformity in advanced stages, so early initiation of treatment, prior to bone and joint damage, is of great importance. To achieve this, we need to better understand the pathogenesis of the disease and the interaction of its risk factors, and also to develop better diagnostic tools on the basis of this information.
Westhoff G, Rau R, Zink A: Rheumatoid arthritis patients who smoke have a higher need for DMARDs and feel worse, but they do not have more joint damage than non-smokers of the same serological group. Rheumatology (Oxford). 2008, 47: 849-854.

Rheumatoid arthritis and smoking: putting the pieces together
Abstract
Besides atherosclerosis and lung cancer, smoking is considered to play a major role in the pathogenesis of autoimmune diseases. It has long been known that there is a connection between rheumatoid factor-positive rheumatoid arthritis and cigarette smoking. Recently, an important gene–environment interaction has been revealed; that is, carrying specific HLA-DRB1 alleles encoding the shared epitope and smoking establish a significant risk for anti-citrullinated protein antibody-positive rheumatoid arthritis. We summarize how smoking-related alteration of the cytokine balance, the increased risk of infections (the possibility of cross-reactivity) and modifications of autoantigens by citrullination may contribute to the development of rheumatoid arthritis.
Introduction
It has long been known that there is a connection between seropositive rheumatoid arthritis (RA) and smoking. The exact underlying mechanism, however, has remained speculative.
Cigarette smoking is one of the major environmental factors suggested to play a crucial role in the development of several diseases. Disorders affecting a great portion of the population, such as atherosclerosis, lung cancer and cardiovascular diseases, are highly associated with tobacco consumption. More recently, it has been reported that smoking is involved in the pathogenesis of certain autoimmune diseases such as RA, systemic lupus erythematosus, systemic sclerosis, multiple sclerosis and Crohn's disease.
Vessey and colleagues first described an association between hospitalization due to RA and cigarette smoking, an unexpected finding of their gynecological study [1]. Since then, several population-wide case–control and cohort studies have been carried out [2]. For example, a population-based case–control study in Norfolk, England showed that ever smoking was associated with a higher risk of developing RA [3].
Smoking as a Preventable Risk Factor for Rheumatoid Arthritis (https://synapse.koreamed.org/articles/1122079)
Rheumatoid arthritis (RA) is a chronic inflammatory disease of multifactorial etiology. Smoking is considered one of the most established environmental risk factors for RA development and severity, and a large proportion of patients with RA have a history of smoking. Previous studies have provided evidence suggesting that smoking is associated with the development of RA. Smoking has been linked to several pathogenic mechanisms in RA development, such as oxidative stress, inflammation, and epigenetic changes. Public health campaigns are needed to educate the public regarding these risks, and preventive measures that reduce smoking are essential and may result in a decline in RA incidence. Encouragement of smoking cessation is especially warranted in relatives of patients with RA. Recently, RA-specific smoking cessation interventions have been developed. This review will summarize the knowledge accumulated to date concerning associations between smoking and RA.
INTRODUCTION
Rheumatoid arthritis (RA) is an autoimmune-mediated inflammatory disease that affects 0.5%~1% of the overall population [1]. RA results from the interaction between genetic constitution and environmental triggers. The disease has been classified into two major subsets based on the presence or absence of anti-citrullinated protein antibodies (ACPA) [2,3,4].
Our understanding of the pathogenesis of RA has progressed over the past two decades thanks to epidemiologic and translational studies. Specifically, epidemiologic investigations of smoking and the risk of RA have allowed the construction of a paradigm for RA development and generated novel hypotheses that have been tested in translational studies to further understand the biology of RA pathogenesis. Many environmental factors have been associated with an increased risk of developing RA, but to date smoking is the only environmental risk factor that has been extensively studied and is widely accepted. The association of smoking with RA led investigators to initially consider the lung as a site of RA pathogenesis. Recent studies have revealed a link between smoking and inflammatory arthritis. Indeed, the particular association of smoking with RA seropositive for ACPA led investigators to consider citrullination an essential biologic process in RA pathogenesis. The identification of abnormalities in lung structure and local autoantibody production further supports this hypothesis [5]. Cigarette smoking has also been associated with inflammatory joint symptoms in unaffected first-degree relatives of RA patients [6]. Smoking may thus play an important role in different phases of RA pathogenesis before the manifestation of clinical symptoms.
Smoking is associated with an increased risk of developing seropositive RA (rheumatoid factor [RF] and/or ACPA positivity). Recent studies have shown that smoking can influence the disease phenotype, with the development of more aggressive disease and more severe joint damage, although other studies have reported contradictory results. Recent data have also suggested that smokers respond less favorably to antirheumatic therapy [7]. This review will address smoking as a risk factor for RA, the controversies regarding the effects of tobacco on RA, the role of nicotine in RA pathogenesis, and the benefits of smoking cessation in patients with RA.
The available literature, including meta-analyses, reviews, randomized controlled trials, systematic reviews, clinical trials, and case reports written in English and published within the past 10 years, was reviewed. A bibliographic search for the terms rheumatoid arthritis, smoking, nicotine, tobacco and smoking cessation was performed using the PubMed database. A total of 924 publications were identified and, following assessment of each study, 64 articles related to the main theme of this review were selected. This study was exempted from Hanyang University Hospital Institutional Review Board evaluation in 2018.
MAIN SUBJECTS
Smoking as a risk factor for rheumatoid arthritis
The risk of developing RA in smokers is known to be double that of non-smokers. Previous epidemiological studies have identified smoking as an important risk factor for RA [4,8,9,10,11,12,13,14,15]. Numerous environmental factors have been associated with an increased risk of developing RA, but to date tobacco smoking has been the most important environmental risk factor to be extensively studied and widely accepted. Multiple studies have reported odds ratios (OR) of association between smoking and RA of 2 or greater, with estimates that exposure to smoking accounts for 20%~30% of the environmental risk for RA [16]. Exposure to cigarette smoking was first linked to RA over 30 years ago [15]. Smoking is now recognized as the most established environmental risk factor for the development of RA [17]. Several studies have indicated that male smokers have a higher risk of developing RA than females [10,12,14], whereas others have demonstrated that female smokers have a higher risk of RA [11,13,15]. A meta-analysis concluded that lifelong cigarette smoking was positively associated with the risk of RA even among smokers with low lifelong exposure levels [9]. In addition, a few large-scale epidemiologic studies have also provided support for smoking being a stronger risk factor for RA in men than in women [17,18]; this might indicate that there are sex-related differences in the effects of smoking or that women have different risk factors for the development of RA.
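As an aside on the statistics used in these studies: the odds ratios quoted above come from 2×2 case-control tables. A minimal sketch of the computation (Woolf's logit method for the confidence interval; the counts below are invented for illustration and are not taken from any cited study):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI for a 2x2 case-control table.

    a: exposed cases,    b: unexposed cases
    c: exposed controls, d: unexposed controls
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of ln(OR), Woolf's method
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Invented counts: 120 of 250 cases ever smoked vs 90 of 300 controls.
or_, lo, hi = odds_ratio_ci(a=120, b=130, c=90, d=210)
print(f"OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # OR ≈ 2.15
```

An OR of about 2, as in this toy table, is the magnitude of association between smoking and RA reported in the studies cited above.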
Smoking is associated with ACPA-positive cases of RA [19]. ACPA positivity correlates with disease activity. A previous study found that smoking is responsible for 35% of ACPA-positive cases in a dose-dependent manner (55% in patients with two copies of shared epitope [SE] alleles) [20]. Several previously published studies have reported an association of the SE and smoking with anti-cyclic citrullinated peptide (CCP)-negative as well as anti-CCP-positive RA. In a Korean case-control study, smoking was shown to be associated with anti-CCP-positive RA (OR 2.22) and anti-CCP-negative RA (OR 2.80). Smoking also increased RA susceptibility in individuals with SE alleles, regardless of their anti-CCP antibody status [21]. In the absence of SE, smoking conferred risk for the anti-CCP-negative subset [22]. An association of smoking with CCP-negative RA has also been reported in a Caucasian population.
The risk of RA in ex-smokers diminished with time, whereas the risk of RA rose with increasing duration of smoking. Moreover, the risk of developing ACPA-positive RA diminished with time after cessation of smoking. Lifelong cigarette smoking was positively associated with the risk of RA even among smokers with low lifelong exposure, although the risk of RA did not increase further with an exposure higher than 20 pack-years [9].
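Pack-years, the cumulative-dose measure referred to above, is simply packs smoked per day (20 cigarettes per pack) multiplied by years of smoking. A small illustrative helper (the example exposures are made up):

```python
def pack_years(cigarettes_per_day: float, years_smoked: float) -> float:
    """Cumulative smoking exposure: (cigarettes/day / 20) * years smoked."""
    return (cigarettes_per_day / 20) * years_smoked

# Illustrative exposures, not patient data:
print(pack_years(10, 10))  # 0.5 pack/day for 10 years -> 5.0 pack-years
print(pack_years(20, 30))  # 1 pack/day for 30 years  -> 30.0 pack-years
```

Against the plateau reported above, the second history (30 pack-years) lies beyond the roughly 20 pack-year exposure past which the risk of RA reportedly did not rise further.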
Importantly, there are several features of the relationship between smoking and RA that may mediate the increased risk of developing RA. As indicated above, smoking is most strongly associated with ACPA-positive RA, especially ACPA-positive RA in the setting of a SE [16]. Smoking is a known risk factor for RA, especially among RF-positive men and heavy smokers [13]. Furthermore, smoking has long been associated with the presence of RF even in the absence of RA [23]. This suggests that there may be biological interactions between factors that drive RA development, or at the very least RA-related autoimmunity. As a relevant example, it has been proposed that smoking may lead to increased citrullination, which, within the context of the genetic background, may lead to increased levels of citrullinated proteins and the generation of ACPA, although other local and systemic effects of smoking may also influence immunity [19,23,24].
A recent Swedish population-based case-control study [25] demonstrated that smoking increases the risk of both subsets of RA, with a more pronounced influence on the risk of ACPA-positive RA than on the ACPA-negative subset. For both subsets there seemed to be a threshold (2.5 pack-years for ACPA-positive RA and 5 pack-years for ACPA-negative RA) below which no association between smoking and RA occurred. A dose-response association was observed between the cumulative dose of smoking and the risk of developing ACPA-positive RA. Thus, the duration of smoking had a greater influence on the association between smoking and RA than did the intensity of smoking. The Epidemiological Investigation of Rheumatoid Arthritis (EIRA) study from Sweden found that the interaction between smoking and silica exposure regarding ACPA-positive RA among males depended on the cumulative dose of smoking. The additive interaction effect between these two exposures might require over 10 years to disappear [26].
Furthermore, because smoking has also been associated with increased disease activity, its actions may go beyond the initiation of RA [27]. Thus, a major unanswered question regarding the role of smoking in RA is where it acts in the natural history of the disease. Specifically, does exposure to cigarette smoking trigger the initial autoimmunity, or does it drive the propagation of autoimmunity to the point of disease? Data from twin studies in Sweden have suggested that smoking possibly acts after the initial generation of RA-related autoimmunity and may be related to prolonged "high-dose/intensity" smoking, as measured by pack-years, although other studies suggest that it is the duration of smoking rather than its intensity that imparts risk for RA [28,29].
Pathogenic effects of smoking on rheumatoid arthritis
Although the exact pathogenic effects of smoking on RA remain uncertain, several mechanisms have been proposed to explain how tobacco smoking plays a role in RA [7,30,31]. Smoking can increase oxidative stress: free radicals contained in the tar and vapor phases of smoke impair antioxidative mechanisms, and the resulting oxidative stress is increased in rheumatoid inflammation and has been implicated in the etiology of RA [32,33]. Smoking acts on both cellular and humoral aspects of the immune system to cause a systemic proinflammatory state [34]. Tobacco smoking triggers morphological, physiological, biochemical, and enzymatic changes in the immune system that lead to impaired inflammatory responses [35]. Smoking is known to increase levels of proinflammatory cytokines such as interleukin (IL)-17, an important contributor to RA pathogenesis and chronicity [36]. Epigenetic changes including DNA methylation have been explored, as these may play an important role in gene regulation and the development of RA [37]. Tobacco smoking has been reported to lead to extensive genome-wide changes in DNA methylation [38].
There are also systemic effects of smoking, and it is possible that these lead to changes within joints that drive RA. In this regard, in a study of first-degree relatives of patients with RA, joint tenderness and swelling were associated with smoking even in the absence of RA-related autoantibodies, raising the possibility that smoking may have early direct joint effects that could be related to the future development of RA [6]. In addition to baseline erosion, erythrocyte sedimentation rate, and C-reactive protein, current smoking was identified as a strong independent risk factor (adjusted OR=2.17, 95% CI 1.06~4.45) for radiographic progression in early RA [39].
As discussed above, citrullination has been reported to be an important factor for the development of RA in the ACPA-positive subset. An increasing body of evidence has linked chronic inflammatory events in the lungs of smokers to the production of ACPAs and development of RA [40]. Previous meta-analysis suggested a gene-environmental interaction between smoking and SE for the development of ACPA [41].
There is a gene-environment interaction between smoking and the HLA-DRB1 SE genotype in seropositive RA patients [21,42]. The interaction of smoking with HLA-DRB1 risk alleles increases the specificity, magnitude, and diversity of the ACPA response, which is directed not only against cyclic citrullinated peptide (CCP) but also against citrullinated α-enolase, fibrinogen, and vimentin peptides. A meta-analysis indicated that both smoking and the protein tyrosine phosphatase non-receptor 22 (PTPN22) risk allele were associated with ACPA positivity [43].
Anti-inflammatory effect of nicotine: benefits of nicotine on rheumatoid arthritis
Paradoxically, nicotine has been reported to reduce inflammation: in a mouse model of RA, nicotine reduced joint swelling, pain, and bone destruction and alleviated synovial inflammation through activation of the cholinergic pathway [44]. Furthermore, nicotine attenuated tumor necrosis factor (TNF)-α-induced IL-6 and IL-8 release in fibroblast-like synoviocytes from RA patients [45]. These results support an important immunosuppressive function of nicotine in tobacco smoke. A large prospective French cohort of patients with early RA revealed that smoking status had no significant effect on RA disease activity and disability, but did reduce 1-year radiographic disease progression [46]. The anti-inflammatory role of nicotine may explain the lower systemic inflammation and structural disease progression in current smokers with early RA. A Swedish epidemiological study showed that the use of smokeless tobacco (moist snuff) was not associated with the risk of either ACPA-positive or ACPA-negative RA, indicating that the increased risk of RA associated with smoking is most likely not due to nicotine [47]. This means that inhaled constituents of tobacco smoke other than nicotine are more likely to be involved in the pathogenesis of ACPA-positive RA.
Conversely, a recent study explored the effects of nicotine on neutrophil extracellular trap (NET) formation [48]. In this study, using neutrophils of RA patients and collagen-induced mice, the authors demonstrated that nicotine increases NETosis, which leads to increasing levels of NETs and may play a crucial role in accelerating arthritis.
A previous multi-cohort study indicated that the effects of smoking on joint damage were mediated via ACPA and that smoking is not an independent risk factor for radiological progression in RA [49]. In addition, no association was identified between second-hand exposure to tobacco smoke and disease activity in RA [50]. A Swedish cohort study showed that moist snuff was not associated with RA, whereas tobacco smoke was related to an increased risk for RA [51]. In other words, smokeless tobacco does not increase the risk of RA, suggesting that inhaled non-nicotinic components of cigarette smoke are more important than nicotine itself in the etiology of RA.
Among patients enrolled in a large randomized controlled trial of early RA with poor prognostic factors, smoking status did not affect treatment responses, whether patients received early combination therapy or initial methotrexate with step-up therapy at 24 weeks if disease was still active [52]. Given that smoking may not be a risk factor for the perpetuation of disease activity or progression, smoking cessation cannot be recommended to prevent severe outcomes in early RA patients [46]. American and Swedish longitudinal observational studies have reported that smoking cessation after the onset of RA did not improve the poor prognosis of smokers with RA [53,54]. However, these results certainly cannot be used to advocate smoking in RA; all RA patients should be encouraged to stop smoking.
Benefits of smoking cessation on disease activity of rheumatoid arthritis
A Swedish cohort study indicated that the risk of RA decreased over time following smoking cessation; nevertheless, compared with never smokers, the risk remained statistically significantly higher [55]. The Swedish EIRA study provided compelling evidence that those with a family history of RA should stop smoking [20], demonstrating that smoking cessation reduced the risk of developing ACPA-positive RA. The effect of smoking cessation in individuals already diagnosed with RA is unknown, but encouragement of smoking cessation is especially warranted in relatives of patients with RA [56].
Despite the success of TNF-alpha inhibitor (TNFi) treatment in RA, a substantial number of patients necessitate discontinuation. TNFi discontinuation is predicted by current smoking and by the number of previously used biological disease-modifying anti-rheumatic drugs (DMARDs), as well as by pack-years of smoking [57].
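The pack-years measure mentioned above is a simple cumulative-exposure product: packs smoked per day multiplied by years of smoking, with one pack counted as 20 cigarettes. A minimal illustration (the numbers in the example are arbitrary, not from the cited study):

```python
def pack_years(cigarettes_per_day: float, years_smoked: float) -> float:
    """Cumulative smoking exposure: (cigarettes/day / 20) * years."""
    return (cigarettes_per_day / 20.0) * years_smoked

# e.g. 15 cigarettes a day for 20 years
print(pack_years(15, 20))  # 15.0
```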
As discussed above, the relationship between smoking, ACPA and RA has been demonstrated by several studies and reports. Therefore, it is essential to inform patients of the hazardous role of smoking in the development and progression of RA. Moreover, as autoimmune diseases in general cause accelerated atherosclerosis due to constant inflammation, and increase the risk of cardiovascular disease, it is imperative that patients understand that smoking cessation is as important as therapy with DMARDs or biologics to achieve remission and improve quality of life [58].
For both ACPA-positive and negative subsets of RA, the detrimental effects of smoking decrease after smoking cessation. Twenty years after smoking cessation, there was no longer an association between smoking and risk of ACPA-negative RA, whereas the association between smoking and ACPA-positive RA risk persisted and was dependent on the cumulative dose of smoking [25].
Clinical practices for smoking cessation in patients with rheumatoid arthritis
An international survey of clinical practices regarding smoking cessation revealed that patient recommendations for smoking cessation within rheumatology departments were not homogeneous [59]. These data highlight the need to improve smoking cessation recommendations for patients with RA. The first interventional study in smokers with RA showed that a lower dependence score and previous attempts to quit smoking were significantly associated with definitive smoking cessation at 12 months [60]. The intervention consisted of the following: (1) a baseline visit, which included verbal and written advice by the rheumatologist, emphasizing the practical benefits of smoking cessation; and (2) a follow-up visit to the nurse after 3 months for reinforcement and to receive pharmacological treatment to help patients quit smoking. Smoking cessation in RA may also reduce the burden of comorbidities. An RA-patient-specific smoking cessation intervention was developed, matching support to specific issues within an individual patient's experience [61]. However, the lack of added benefit of the tailored intervention suggested that brief advice and nicotine replacement therapy (NRT) are currently the best practice for supporting people with RA who wish to quit smoking [62]. This novel RA-specific smoking cessation intervention had two very important components: individualized support and recommendations received from the educators [63]. Both components were considered pivotal to the success of the intervention. Although these novel psychosocial intervention approaches provide a variety of widely accepted and useful components, individual participants need to be motivated to give up smoking.
Smokers with RA may have different motivations for, and barriers to, quitting. There are physical limitations and disease-associated factors that may adversely affect smoking cessation in RA patients. Five key barriers to smoking cessation faced by RA patients have been identified in a previous study [64]. First, patients were unaware of the relationship between smoking and RA and therefore did not perceive this as a reason to quit. Second, smoking was used as a distraction from pain. Third, patients found it difficult to exercise and therefore were unable to use exercise as an alternative distraction. Fourth, smoking was used as a coping mechanism for the frustrations of living with RA. Fifth, patients felt unsupported and isolated from other RA non-smoking patients. These barriers and targeted interventions for patients with RA are outlined in detail in Table 1. Moreover, becoming aware of the effects of smoking on arthritis may represent an important motivation to quit smoking and may counter RA-specific barriers to smoking cessation [65].
A recent ongoing randomized controlled trial protocol will examine whether an intensive smoking cessation intervention can help smokers with RA to achieve continuous smoking cessation and, secondarily, to reduce RA disease activity [66]. The intervention includes individual motivational counselling in combination with tailored NRT. In the motivational counselling sessions, the smoking cessation counsellor's role is to ask, listen, and follow the participant's cues, and to adapt formal information to the participant's motivational stage. The five sessions cover different themes. The first meeting is an introduction to the counselling course and preparation for smoking cessation, including the participant's smoking status and their motivation for cessation. The second meeting aims to prepare the participant for the first three days without smoking. The third meeting aims to help the participant with issues concerning quitting smoking, including risk situations, relapse, reward, social network, and smoking cessation. The fourth meeting covers maintaining motivation, physical activity, and handling of stress and mood swings. The fifth (final) meeting includes continuing help with smoking cessation and preparation for the time after the intervention. The NRT is tailored individually according to the Fagerström Test for Nicotine Dependence [67]. Participants are able to choose between NRT products, including a patch, chewing gum, inhalator or mouth spray. The participants note their tobacco and nicotine replacement consumption in a smoking diary.
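The Fagerström Test for Nicotine Dependence mentioned above [67] sums six items into a 0-10 score. The sketch below follows the commonly published scoring bands; it is an illustration only, and the original instrument should be consulted for exact item wording.

```python
def ftnd_score(minutes_to_first: int, cigs_per_day: int,
               hard_to_refrain: bool, first_most_valued: bool,
               more_in_morning: bool, smokes_when_ill: bool) -> int:
    """Approximate Fagerström Test for Nicotine Dependence score (0-10).

    Follows the commonly published scoring bands: time to first
    cigarette (0-3), cigarettes per day (0-3), and four yes/no items
    worth one point each.
    """
    if minutes_to_first <= 5:
        score = 3
    elif minutes_to_first <= 30:
        score = 2
    elif minutes_to_first <= 60:
        score = 1
    else:
        score = 0
    if cigs_per_day >= 31:
        score += 3
    elif cigs_per_day >= 21:
        score += 2
    elif cigs_per_day >= 11:
        score += 1
    score += sum([hard_to_refrain, first_most_valued,
                  more_in_morning, smokes_when_ill])
    return score

print(ftnd_score(10, 25, True, True, False, True))  # 2 + 2 + 3 = 7
```

In the trial protocol described above, a higher score would map to a stronger starting dose of NRT.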
CONCLUSION
Taken together, most studies have highlighted the risk of RA associated with smoking. Chronic inflammatory mechanisms active in the lungs of smokers lead to the production of ACPA, which, in turn, drive the development of RA. These mechanistic insights not only reinforce the association between smoking and risk of RA, but also the necessity to increase the level of awareness of those at highest risk. Smoking is one of the most prevalent modifiable risk factors for RA. Previous studies have highlighted that smoking cessation may reduce, though not remove, the risk of RA in women. The clearly increased risk of RA development even among former smokers is another reason to persuade women not to start smoking. Creation of awareness of the associated risks, assessment of smoking status, and implementation of smoking cessation treatment alternatives must be included in the routine clinical management of patients presenting with suspected RA.
Unique underwater stalactites
Source: ScienceDaily, https://www.sciencedaily.com/releases/2017/11/171124114942.htm
In recent years, researchers have identified a small group of stalactites that appear to have calcified underwater instead of in a dry cave. The Hells Bells in the El Zapote cave near Puerto Morelos on the Yucatán Peninsula are just such formations. A German-Mexican research team led by Prof. Dr Wolfgang Stinnesbeck from the Institute of Earth Sciences at Heidelberg University recently investigated how these bell-shaped, metre-long formations developed, assisted by bacteria and algae. The results of their research have been published in the journal Palaeogeography, Palaeoclimatology, Palaeoecology.
Hanging speleothems, also called stalactites, result from physicochemical processes in which water rich in calcium carbonate dries up. Normally they taper to a tip at the lower end, from which drops of water fall to the cave floor. The formations in the El Zapote cave, which are up to two metres long, expand conically downward and are hollow, with round, elliptical or horseshoe-shaped cross-sections. Not only are they unique in shape and size, but also in their mode of growth, according to Prof. Stinnesbeck. They grow in a lightless environment near the base of a 30 m freshwater unit, immediately above a zone of oxygen-depleted and sulfide-rich toxic saltwater. "The local diving community dubbed them Hells Bells, which we think is especially appropriate," states Wolfgang Stinnesbeck. Uranium-thorium dating of the calcium carbonate verifies that these formations must have actually grown underwater, proving that the Hells Bells formed in ancient times; even then, the deep regions of the cave had already been submerged for thousands of years.
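As background to the uranium-thorium dating mentioned above: in the idealised textbook case of no initial 230Th and secular 234U/238U equilibrium, the measured 230Th/238U activity ratio R grows toward 1 as R = 1 - exp(-λt), which can be inverted to give an age. The sketch below is that simplified case, not the full correction scheme used in such studies; the half-life is the commonly cited ~75.6 kyr value.

```python
import math

TH230_HALF_LIFE_YR = 75_600  # ~75.6 kyr, commonly cited 230Th half-life
LAMBDA_230 = math.log(2) / TH230_HALF_LIFE_YR

def uth_age(activity_ratio: float) -> float:
    """Idealised U-Th age (years) from the 230Th/238U activity ratio,
    assuming no initial 230Th and secular 234U/238U equilibrium:
        R = 1 - exp(-lambda * t)  =>  t = -ln(1 - R) / lambda
    """
    return -math.log(1.0 - activity_ratio) / LAMBDA_230

# A ratio of 0.5 corresponds to exactly one 230Th half-life:
print(round(uth_age(0.5)))  # 75600
```

Because the ratio saturates toward 1, the method is most sensitive for samples younger than a few half-lives, which comfortably covers speleothems like the Hells Bells.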
According to the Heidelberg geoscientist, this underwater world on the Yucatán Peninsula in Mexico represents an enigmatic ecosystem providing the conditions for the formation of the biggest underwater speleothems worldwide. Previously discovered speleothems of this type are much smaller and less conspicuous than the Hells Bells, adds Prof. Stinnesbeck. The researchers suspect that the growth of these hollow structures is tied to the specific physical and biochemical conditions near the halocline, the layer that separates the freshwater from the underlying saltwater. "Microbes involved in the nitrogen cycle, which are still active today, could have played a major role in calcite precipitation because of their ability to increase the pH," explains Dr Stinnesbeck.
How are stalactites and stalagmites formed?
Source: Live Science, https://www.livescience.com/stalagmites-and-stalactites
Stalactites and stalagmites decorate caves the world over. Stalactites hang down from the ceiling, while stalagmites rise up from the ground. They grow incredibly slowly, and some are so ancient that they predate modern humans, Live Science previously reported.
These tooth-like rock formations grow when dripping water comes into contact with the cave air, according to the National Park Service website. The water carries dissolved minerals, picked up on its journey from Earth's surface. As it passes through the cave, it leaves tiny traces of those minerals behind, building each stalactite drip by drip.
What shape are stalactites?
Most stalactites are cone-shaped: thick at the top and tapered to a point at the bottom. But some are hollow. Shaped like straws, these stalactites grow when water trickles down their centre. As each drip evaporates, it leaves another shell of minerals at the bottom of the tube.
Cave straws are incredibly fragile and often crumble at the slightest touch, making them a rare find in well-trodden caves, according to the Journal of Cave and Karst Studies.
Some straw-shaped stalactites seem to defy gravity. Known as helictites, these structures have twists, spurs, and knobbles that tilt off in all directions. Scientists aren't sure exactly how they form, but they think it might be down to a combination of capillary action and wind, according to the Universities Space Research Association.
Slight changes in the air currents through a cave, or in the orientation of the crystals in a growing stalactite, can draw tiny water droplets off in new directions. Rather than dripping towards the floor under the force of gravity, they travel sideways or even upwards, leaving their minerals behind as they go.
What do stalactites and stalagmites contain?
Each drop of water contains dissolved limestone particles. They harden when they hit air. (Image credit: Getty)
Most of the stalactites you see in caves are made from calcium carbonate, according to the Royal Society of Chemistry. It forms two main types of crystals: calcite and aragonite. They have the chemical formula CaCO3.
For this reason, stalactites only tend to appear in caves where the surrounding rocks contain calcium in the form of limestone or dolomite.
Stalactites can also carry traces of other chemicals, which give them different colors and textures. These chemicals include carbonates, sulphides, and even opal.
Limestone caves often contain stalagmites as well as stalactites. These structures grow on the floor, with a thick base and a point that looks up towards the cave ceiling. Some are flat like fried eggs, while others are long and thin, like broomsticks, according to the Encyclopedia of Caves (Third Edition, 2019).
Stalagmites often grow directly beneath stalactites, mopping up any minerals from water droplets that splash down onto the cave floor. However, the two types of cave decoration don't always come in pairs: either one can appear on its own.
Other cave features
Limestone caves can also contain other kinds of cave decoration. According to the journal Transactions of the Royal Society of South Africa, stalactites and stalagmites are both types of dripstones, named because they form from dripping water. But you might also see flowstones and cave popcorn.
Flowstones appear when water comes down a cave wall in sheets, according to Yorkshire Dales National Park. They look like curtains of stalactites, hanging together like a waterfall frozen in time. Sometimes flowstones contain layers of color from the minerals left behind by the water, earning them the name 'cave bacon', according to the American Geophysical Union (AGU).
Cave popcorn forms where water comes through pores in the rock, forming bumps and lumps that look like berries.
The bumps on the wall of this cave are called cave popcorn. (Image credit: Getty)
The chemistry of limestone stalactites
Stalactites and stalagmites form when rainwater drips through limestone rock. Along the way, it picks up carbon dioxide, from the air and from any organic matter it passes as it dribbles down, according to the National Park Service. The carbon dioxide reacts with the water to make a weak acid called carbonic acid. This acid can dissolve limestone, reacting with the mineral calcite and drawing it into the water as calcium bicarbonate.
As the water drips into the cave, it comes into contact with the air again. There, it lets go of the carbon dioxide, and the calcium comes out of solution, forming rock-hard calcite again.
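The dissolution-and-redeposition cycle described in the two paragraphs above can be summarised in three standard carbonate reactions (textbook chemistry, not specific to any one cave):

```latex
% CO2 from air and soil dissolves in percolating rainwater:
\mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3}
% Carbonic acid dissolves limestone on the way down:
\mathrm{H_2CO_3 + CaCO_3 \rightleftharpoons Ca(HCO_3)_2}
% In cave air the drop degasses CO2 and calcite precipitates:
\mathrm{Ca(HCO_3)_2 \longrightarrow CaCO_3 + H_2O + CO_2}
```

The third reaction runs forward when the drop loses CO2 to the cave air, which is why calcite is deposited at the drip point rather than in the rock above.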
Strange stalactites
Did you know, stalagmites and stalactites aren't always found in caves? You can see them under concrete buildings, in lava tubes, and even hanging off the side of your garage in the winter. This is because stalagmites and stalactites aren't always made from limestone.
Ice stalactites are probably the most common type of stalactite. They form when it's cold enough for water to freeze, but sunny enough for it to melt again, according to a 2019 article in the Encyclopaedia of Caves. The melted water trickles towards the ground and re-freezes before it hits the floor.
Scientists from the University of Cambridge and the University of Arizona used photographs, maths, and physics to work out why icicles are pointy. They noticed that, as the water trickles down, it blends together to form a sheet. That sheet gives off heat, making a warm pocket of air around the icicle. The warm air rises up, which means that water freezes faster at the bottom of the icicle, making them grow long and thin at the tip.
Nohoch Nah Chich, Mexico: These stalactites and stalagmites form part of the longest underwater cave system in the world. (Image credit: Getty)
Another kind of stalactite you might see in your everyday life is a concrete stalactite, according to the Royal Society of Chemistry. You can find them in car parks and even on the pipes in your home. Concrete contains calcium oxide, which dissolves when alkaline liquid passes through it. When that liquid hits the air, the calcium comes out of solution, forming a hard substance called calthemite. If the drips come fast enough, calthemite stalagmites can start to form too.
Other strange stalactites are a bit harder to find. Lava stalactites form inside tunnels called lava tubes, which carry molten rock beneath the Earth, according to the International Journal of Speleology. When the roof of a lava tube starts to cool, it gets a skin, a bit like a bowl of custard. Underneath, hot gases keep expanding, pushing on the skin and stretching it out to form hollow tubes that harden into solid rock.
Laura Mears is a biologist who left the confines of the lab for the rigours of an office desk as a keen science writer and a full-time software engineer. Laura has previously written for the magazines How It Works and T3. Laura's main interests include science, technology and video games.
Heidelberg Researchers Study Unique Underwater Stalactites
Source: Heidelberg University press release, https://www.uni-heidelberg.de/presse/news2017/pm20171124_unique_underwater_stalactites.html
Current investigations show how the Hells Bells on the Yucatán Peninsula formed
Source: E.A.N./IPA/INAH/MUDE/UNAM/HEIDELBERG
The Hells Bells in the El Zapote cave near Puerto Morelos on the Yucatán Peninsula.
In recent years, researchers have identified a small group of stalactites that appear to have calcified underwater instead of in a dry cave. The Hells Bells in the El Zapote cave near Puerto Morelos on the Yucatán Peninsula are just such formations. A German-Mexican research team led by Prof. Dr Wolfgang Stinnesbeck from the Institute of Earth Sciences at Heidelberg University recently investigated how these bell-shaped, metre-long formations developed, assisted by bacteria and algae. The results of their research have been published in the journal “Palaeogeography, Palaeoclimatology, Palaeoecology”.
Hanging speleothems, also called stalactites, result from physicochemical processes in which water rich in calcium carbonate dries up. Normally they taper to a tip at the lower end, from which drops of water fall to the cave floor. The formations in the El Zapote cave, which are up to two metres long, expand conically downward and are hollow, with round, elliptical or horseshoe-shaped cross-sections. Not only are they unique in shape and size, but also in their mode of growth, according to Prof. Stinnesbeck. They grow in a lightless environment near the base of a 30 m freshwater unit, immediately above a zone of oxygen-depleted and sulfide-rich toxic saltwater. “The local diving community dubbed them Hells Bells, which we think is especially appropriate,” states Wolfgang Stinnesbeck. Uranium-thorium dating of the calcium carbonate verifies that these formations must have actually grown underwater, proving that the Hells Bells formed in ancient times; even then, the deep regions of the cave had already been submerged for thousands of years.
According to the Heidelberg geoscientist, this underwater world on the Yucatán Peninsula in Mexico represents an enigmatic ecosystem providing the conditions for the formation of the biggest underwater speleothems worldwide. Previously discovered speleothems of this type are much smaller and less conspicuous than the Hells Bells, adds Prof. Stinnesbeck. The researchers suspect that the growth of these hollow structures is tied to the specific physical and biochemical conditions near the halocline, the layer that separates the freshwater from the underlying saltwater. “Microbes involved in the nitrogen cycle, which are still active today, could have played a major role in calcite precipitation because of their ability to increase the pH,” explains Dr Stinnesbeck.
Current investigations show how the Hells Bells on the Yucatán Peninsula formed
Source: E.A.N./IPA/INAH/MUDE/UNAM/HEIDELBERG
The Hells Bells in the El Zapote cave near Puerto Morelos on the Yucatán Peninsula.
In recent years, researchers have identified a small group of stalactites that appear to have calcified underwater instead of in a dry cave. The Hells Bells in the El Zapote cave near Puerto Morelos on the Yucatán Peninsula are just such formations. A German-Mexican research team led by Prof. Dr Wolfgang Stinnesbeck from the Institute of Earth Sciences at Heidelberg University recently investigated how these bell-shaped, metre-long formations developed, assisted by bacteria and algae. The results of their research have been published in the journal “Palaeogeography, Palaeoclimatology, Palaeoecology”.
Hanging speleothems, also called stalactites, result from physicochemical processes in which water high in calcium carbonate dries up. Normally they taper toward a tip at the lower end, from which drops of water fall to the cave floor. The formations in the El Zapote cave, which are up to two metres long, expand conically downward and are hollow with round, elliptical or horseshoe-shaped cross-sections. Not only are they unique in shape and size, but also in their mode of growth, according to Prof. Stinnesbeck. They grow in a lightless environment near the base of a 30 m freshwater unit immediately above a zone of oxygen-depleted and sulfide-rich toxic saltwater. “The local diving community dubbed them Hells Bells, which we think is especially appropriate,” states Wolfgang Stinnesbeck. Uranium-thorium dating of the calcium carbonate verifies that these formations actually grew underwater and must have formed in ancient times, when the deep regions of the cave had already been submerged for thousands of years.
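Uranium-thorium dating, mentioned above, exploits the ingrowth of 230Th toward equilibrium with 238U in the calcite. As a rough illustration only (the decay constants are commonly cited values and the function names are my own, not anything from the press release), the standard closed-system age equation can be inverted numerically:

```python
import math

# Decay constants in 1/yr (commonly cited values; check current literature).
LAM230 = 9.1577e-6   # 230Th
LAM234 = 2.8263e-6   # 234U

def th230_u238(t, d234=0.0):
    """Predicted (230Th/238U) activity ratio of closed-system calcite of age
    t years, given the measured delta-234U (per mil)."""
    grow = 1.0 - math.exp(-LAM230 * t)
    corr = (d234 / 1000.0) * (LAM230 / (LAM230 - LAM234)) * (
        1.0 - math.exp(-(LAM230 - LAM234) * t)
    )
    return grow + corr

def uth_age(ratio, d234=0.0, t_max=600_000.0):
    """Invert the age equation by bisection (ratio must lie below equilibrium)."""
    lo, hi = 0.0, t_max
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if th230_u238(mid, d234) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A measured activity ratio of ~0.04 with delta-234U near zero, for example, corresponds to an age of a few thousand years; samples at secular equilibrium (ratio near 1) lie beyond the roughly 600 ka range of the method.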
Hells Bells: Unique underwater stalactites in Yucatán Caves
In recent years, scientists have identified a small group of stalactites in which the characteristic calcification process does not occur in a dry environment, but underwater. An example of these formations is the Hells Bells in the El Zapote cave near Puerto Morelos on the Yucatán Peninsula.
Yucatán has a very special landscape. Over the course of Earth's history, the limestone bedrock of the Mexican peninsula eroded and collapsed. The resulting cenotes filled with rainwater and today form the entrances to gigantic cave systems. According to Mayan belief, these limestone pits are the entrance to Xibalbá, the underworld of the Maya; the name ‘Xibalbá’ means ‘place of fear’. In some of these caves, people once made animal and human sacrifices to the gods, and the bones can still be found there.
Today, Professor Dr. Wolfgang Stinnesbeck from the Institute of Geosciences of the University of Heidelberg and his German-Mexican research team have explored something in this fantastic underworld that should not be possible: stalactites that formed underwater.
The team of researchers has analysed how these bell-shaped and metre-long structures formed with the involvement of bacteria and algae. The results of this research were published in the journal Palaeogeography, Palaeoclimatology, Palaeoecology.
Hanging speleothems, also called stalactites, form during physico-chemical processes in which calcareous water dries up. They usually taper toward a tip at their lower end, from which drops of water fall to the floor of the cave. The formations in the El Zapote Cave, which are up to two metres long, widen conically downward, are hollow, and have round, elliptical or horseshoe-shaped cross-sections. However, it is not only their shape and size that are unique, but the conditions of their growth as well, said Professor Stinnesbeck. They form in a completely lightless environment at the base of a 30-metre-thick freshwater unit, located immediately above an oxygen-free zone of sulphide-rich, toxic saltwater. "The local diving community dubbed them Hells Bells, which we think is especially appropriate," said Stinnesbeck.
That these formations grew underwater has been proven by uranium-thorium dating of the calcareous structures: the Hells Bells formed in ancient times, when the deep areas of the cave had already been flooded for thousands of years.
As the Heidelberg geoscientist explained, this underwater world on the Yucatán Peninsula in Mexico represents an enigmatic ecosystem in which the largest underwater speleothems known today could form. According to Professor Stinnesbeck, previously discovered speleothems of this kind are much smaller and less obvious than the Hells Bells. The researchers speculate that the development of Hells Bells is linked to specific physical and biochemical conditions near the halocline. This refers to the layer in the water column that separates the freshwater from the heavier saltwater below it. "Microbes involved in the nitrogen cycle, which are still active today, could have played a major role in calcite precipitation because of their ability to increase the pH," Professor Stinnesbeck said. | The team of researchers has analysed how these bell-shaped and metre-long structures formed with the involvement of bacteria and algae. The results of this research were published in the journal Palaeogeography, Palaeoclimatology, Palaeoecology.
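The closing remark, that microbes raise the pH and thereby drive calcite precipitation, has a simple chemical basis: the fraction of dissolved inorganic carbon present as carbonate ion rises steeply with pH. A sketch using approximate 25 °C textbook constants (my own illustration, not from the article):

```python
# Fraction of dissolved inorganic carbon present as CO3^2- at a given pH.
K1 = 10 ** -6.35   # H2CO3* <=> H+ + HCO3-   (approximate, 25 degC)
K2 = 10 ** -10.33  # HCO3- <=> H+ + CO3^2-   (approximate, 25 degC)

def carbonate_fraction(ph):
    h = 10.0 ** -ph
    return 1.0 / (1.0 + h / K2 + h * h / (K1 * K2))

low = carbonate_fraction(7.0)
high = carbonate_fraction(8.0)
# A one-unit pH increase raises the CO3^2- fraction by roughly an order of
# magnitude, pushing the water toward calcite saturation at fixed Ca2+ and DIC.
```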
Hanging speleothems, also called stalactites, are formed during physico-chemical processes in which calcareous water dry up. They usually rejuvenate and form a tip at their lower end from which the drops of water fall on the floor of the cave. The formations in the El Zapote Cave, which are up to two metres long, open conically, are hollow and have round, elliptical or horseshoe-shaped cross-sections. However, it is not only their shape and size that are unique, but the conditions of their growth as well, said Professor Stinnesbeck. They are formed in a completely lightless environment at the base of a 30-metre-thick freshwater unit, which is located immediately above an oxygen-free zone containing sulphide-containing toxic saltwater. "The local diving community dubbed them Hells Bells, which we think is especially appropriate," said Stinnesbeck.
The fact that these formations were formed underwater has been proven by uranium-thorium dating of the calcareous structures. They prove that the Hells Bells have been forming since historical times. The deep areas of the cave had been flooded for thousands of years.
As the Heidelberg geoscientist explained, this underwater world on the Yucatán Peninsula in Mexico represents an enigmatic ecosystem in which the largest underwater speleothems known today could form. According to Professor Stinnesbeck, previously discovered speleothems of this kind are much smaller and less obvious than the Hells Bells. The researchers speculate that the development of Hells Bells is linked to specific physical and biochemical conditions near the halocline. This refers to the layer in the water column that separates the freshwater from the heavier saltwater below it. | yes |
Unique underwater stalactites | Geology Page
The Hells Bells in the El Zapote cave near Puerto Morelos on the Yucatán Peninsula. Credit: E.A.N./IPA/INAH/MUDE/UNAM/HEIDELBERG
According to the Heidelberg geoscientist, this underwater world on the Yucatán Peninsula in Mexico represents an enigmatic ecosystem providing the conditions for the formation of the biggest underwater speleothems worldwide. Previously discovered speleothems of this type are much smaller and less conspicuous than the Hells Bells, adds Prof. Stinnesbeck. The researchers suspect that the growth of these hollow structures is tied to the specific physical and biochemical conditions near the halocline, the layer that separates the freshwater from the underlying saltwater. “Microbes involved in the nitrogen cycle, which are still active today, could have played a major role in calcite precipitation because of their ability to increase the pH,” explains Dr Stinnesbeck.
Stalactite - an overview | ScienceDirect Topics
Abstract
Stalactites and stalagmites are the most common speleothems. Stalactites are centimeter to meter in scale, hang from the ceiling and grow toward the cave floor. Stalagmites grow from the cave floor upward and are commonly fed by water dripping from an overhead stalactite. The most common variety of stalactites is the tubular soda straw, which is characterized by a central hollow tube and a translucent wall structure. Stalagmite morphologies are mostly determined by the drip rate, with candle-shaped stalagmites fed by relatively slow drips and dome-shaped stalagmites fed by fast drips. These speleothems archive Earth's climate changes from the Palaeozoic to the Present.
Introduction
Stalactites and stalagmites are the most common speleothems, the morphology of which is basically controlled by dripping; therefore, both speleothems can be considered as gravitational forms. Stalactites are centimeter to meter in scale, hanging from the ceiling and growing toward the cave floor. Stalagmites grow from the cave floor upward and are commonly fed by water dripping from an overhead stalactite (Fig. 1). Stalagmitic flowstones are a particular type of stalagmite formed by a thin flowing film of water itself fed by groups of dripping stalactites, and coat the cave floor and walls. When a stalagmite and the overhanging stalactite merge, they form a column (Fig. 1). Most stalactites and stalagmites are composed of calcite, a few of aragonite, the rhombohedral and orthorhombic phases of calcium carbonate (CaCO3), respectively. Rare stalactites and stalagmites consisting of huntite (a Mg-carbonate), halite (NaCl), gypsum (CaSO4·2H2O), and even opal (amorphous hydrated SiO2) have been found.
Figure 1. Stalactites, both soda straws and cone stalactites, candle-shaped stalagmites, columns (stalactites and stalagmites merged), and stalagmitic flowstone coating the cave floor. The plastic containers host glasses onto which in situ calcite precipitation experiments have been carried out to determine the processes that influence the development of different crystals.
Stalactites and stalagmites likely started to develop in caves when the first carbonate rocks had been subaerially exposed and eroded well over 1 billion years ago. Most speleothems that have been extensively studied date from the Quaternary, and the genesis of these is commonly driven by the process of degassing, which occurs when drip waters having a high carbon dioxide concentration (pCO2) interact with the cave atmosphere that has a relatively low pCO2. It is, therefore, believed that occurrence of stalagmites and stalactites greatly increased since the rise of vascular plants in the Devonian, which led to an acceleration of chemical weathering, greater availability of soil CO2, and a decline in global atmospheric CO2 concentration (Alonso-Zarza and Tanner, 2010). Chemical weathering of Ca-bearing silicate minerals by acidic waters generated in peat soil, for example, is very effective in yielding calcite stalactites and stalagmites in caves cut into granite and gneiss; in such cases karst dissolution does not play a role in the genesis of these speleothems, but the presence of vascular plants does. The focus of the following sections is, therefore, on the genesis, structure, and chemical properties of calcium carbonate stalagmites and stalactites since the Devonian.
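The causal chain in this paragraph, from more soil CO2 to more dissolution to more speleothem deposition after degassing, can be made quantitative. For calcite dissolving in water open to a fixed CO2 partial pressure (CaCO3 + CO2 + H2O <=> Ca2+ + 2 HCO3-), combining Henry's law, the carbonic-acid dissociation constants, and the calcite solubility product gives [Ca2+] = (K * pCO2 / 4)^(1/3). The sketch below uses approximate 25 °C textbook constants and ignores activity corrections; it is an illustration, not part of the cited work:

```python
# Equilibrium Ca2+ for CaCO3 + CO2 + H2O <=> Ca2+ + 2 HCO3- (open system,
# charge balance 2[Ca2+] = [HCO3-], activities taken equal to concentrations).
KH  = 10 ** -1.47   # Henry's law constant for CO2 in water, mol/(L*atm)
K1  = 10 ** -6.35   # first dissociation of carbonic acid
K2  = 10 ** -10.33  # second dissociation
KSP = 10 ** -8.48   # calcite solubility product

def equilibrium_ca(p_co2_atm):
    k = K1 * KH * KSP / K2
    return (k * p_co2_atm / 4.0) ** (1.0 / 3.0)  # mol/L

atm  = equilibrium_ca(4e-4)  # water equilibrated with open-air CO2
soil = equilibrium_ca(1e-2)  # water equilibrated with CO2-rich soil air
# A 25-fold rise in pCO2 lets the water carry about 25**(1/3) ~ 2.9 times more
# dissolved calcite, which is re-deposited once the water degasses in a cave.
```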
4.3.2.2 Ice Speleothems (Stalagmites, Stalactites, and Columns)
Ice stalagmites and stalactites are common in cave entrances during the cold season in mid-to-high-latitude and/or high-altitude caves. They form as drip water freezes and usually melt once the temperature of the water feeding them becomes positive. In caves with perennial ice, the stalagmites themselves can be perennial, since the negative temperatures maintained throughout the year in the vicinity of underground glaciers help them survive. Differences exist, however, between stalagmites on one hand and stalactites and columns on the other: the latter melt earlier and usually completely, the warm water leading to their detachment from the ceiling and collapse.
The dynamics of these ice formations have been studied in great detail in Scărișoara Ice Cave, Romania (SIC), by Viehmann and Racoviţă (1968), Racoviţă et al. (1987), and Racoviţă (1994), and are summarized below as a "template" for other caves as well.
In Scărișoara Ice Cave, the dynamics of ice speleothems were studied separately for stalagmites, ice massifs, and ice crusts on the cave floor (Fig. 4.3.6), on both annual and subannual time scales.
Fig. 4.3.6. Seasonal variations (10 years average) of the upper face of the ice block in SIC (A), ice massifs (B), ice stalagmites (C), and floor ice crust (D). The ice massifs are similar to the one visible on the left in Fig. 4.3.1.
For all these speleothems, a maximum was reached between March and June, with the larger ice bodies having a delayed onset of melting. Ice crusts on the floor (Fig. 4.3.6D) grow outwards from the edges of the ice block, thinning with distance; their dynamics are controlled by the inflow of cold air, which sweeps the upper surface of the ice block and leads to water freezing. This genetic mechanism, however, renders them extremely vulnerable to melting, a process which begins once the inflow of cold air ceases in early spring (Figs. 4.3.2 and 4.3.6D). The melting of the larger ice forms, both stalagmites and ice massifs (Fig. 4.3.6B and C), is delayed by the larger thermal inertia of the ice (Perșoiu et al., 2011). Regardless of their size and shape, however, all ice forms in caves (including the large ice blocks) reach a minimum at the end of the melting season, just before the onset of freezing, usually in November (Figs. 4.3.2 and 4.3.6). Both the growth and the melting of ice are complex processes, controlled by the variable interplay between air temperature and the amount and distribution of precipitation, so there is no clear correlation with either of these two alone (Perșoiu et al., 2011). The effect of precipitation reaching ice caves depends strongly on winter air temperature: water input superimposed on below-0°C conditions leads to rapid ice build-up, while the same input during periods with positive temperature anomalies results in ice loss. In summer, air temperature plays a lesser role, as the latent heat of the ice keeps cave temperatures from rising above 0°C; instead, the heat delivered by inflowing warm water is the main driver of ice ablation.
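The interplay described above, where the same water input builds ice under sub-zero air but destroys it during positive anomalies, can be caricatured with a toy daily mass balance. The coefficients below are arbitrary illustrations, not values calibrated to Scărișoara:

```python
def ice_balance(days, ice=0.0, freeze_eff=1.0, melt_per_degree=0.2):
    """Toy daily mass balance; each entry of `days` is (air_temp_C, water_mm).

    Sub-zero air freezes the incoming water onto the ice body; above 0 C the
    heat carried by inflowing water melts ice in proportion to its warmth.
    Returns ice thickness in mm water equivalent (never negative).
    """
    for temp, water in days:
        if temp < 0.0:
            ice += freeze_eff * water
        else:
            ice -= melt_per_degree * temp * water
        ice = max(ice, 0.0)
    return ice

cold_month = [(-5.0, 4.0)] * 30   # drip input under sub-zero air
warm_month = [(+3.0, 4.0)] * 30   # identical input, positive anomaly

cold = ice_balance(cold_month)            # rapid build-up
rest = ice_balance(warm_month, ice=cold)  # the same input now ablates ice
```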
A peculiar type of subannual ice dynamics is that of the "thermoindicator" speleothems (Viehmann and Racoviţă, 1968, Fig. 4.3.7). These form in winter, during periods of alternating cold and mild weather: the translucent, bulky sections develop when temperatures are between 0°C and −2°C and drip water freezes slowly, expelling gas and calcite impurities. When temperatures drop below −3°C, dripping water tends to freeze quickly, incorporating both air bubbles and in situ precipitated cryogenic cave calcite (Žák et al., 2008), hence the semiopaque appearance of the ice.
Conclusions
Radiometric dates of stalagmites, stalactites, and flowstones are a valuable tool to establish upper limits of sea-level position through time. Establishing the timing of speleothem growth helps to set bounds on the timing of cave emergence above sea level. Dates bounding erosional unconformities in speleothems formed during seawater submergence provide further constraints on the timing of sea-level highstands and the elevation of the speleothem in the cave provides an important limit on the position of sea level at that time. Organisms that encrust or bore into cave deposits can be studied to provide a complementary age on the timing of cave submergence. In particular environments, it is also possible to capitalize on the occurrence of phreatic overgrowths that form at the sea surface to establish the timing and position of sea level. In this last case, instead of providing a bracketing age, that is an age that represents a minimum or maximum estimate on the timing of sea level passing through the elevation of the cave, the age of the phreatic overgrowth represents the exact time at which sea level was at that position. In all cases, these age constraints provide important benchmarks to test existing models and reconstructions of sea level through time.
Our understanding of the timing and magnitude of sea-level changes can be greatly enhanced by further study of both emergent and submerged caves. In particular, submerged caves have the potential to deliver rare information on the timing of sea-level lowstands, and the timing of sea-level changes during glacial periods in addition to providing bracketing ages for sea-level highstands. Existing data from submerged speleothems and associated cave deposits are scarce due to practical difficulties entailed in locating and recovering samples from underwater caves. In fact, this raises an ethical question of how this line of research should be pursued in light of the risk involved with cave diving to collect these valuable specimens. Perhaps there is scope to conduct future explorations or even sampling of submerged cave systems with remotely operated underwater vehicles.
Alternatively, one could also turn to uplifted areas to access speleothems in caves that were once submerged. Coral terraces exposed by rapid tectonic uplift have been exploited to access this portion of the record at localities such as Barbados and Huon Peninsula, Papua New Guinea (Chappell and Polach, 1991; Fairbanks, 1989). Unfortunately, there remain few examples of radiometrically dated speleothems from similarly uplifted environments to constrain former sea-level oscillations. Cave exploration in such environments certainly has the potential to yield an important and largely untapped resource for additional constraints on past sea-level change.
26.5.1.4 Forms and Dynamics of Ice
Both congelation ice (lake ice, stalactites) and sublimation ice occur in the cave. The temperature in the cave during the summer does not exceed 3°C, and the ice and hoarfrost in the heart of the cave do not melt through almost the entire summer. Even in summer, the lake is sometimes covered with ice.
Although in the summer of 1980 the air temperature in the caves of the Ichalkovsky pine forest was negative (ranging from −0.40°C to −6°C), winters have since warmed and the air temperature during the summer months is now positive, which has led to a decrease in the amount of permanent ice in the caves. In some years the presence of ice in the caves is only seasonal.
Subaerial Helictites
Helictites are elongated speleothems that, unlike stalactites, may grow in any direction. Upward-growing helictites have sometimes been called heligmites, but there is little logical basis for this distinction, because helictites do not occur as separate “up” and “down” forms. Helictites may be straight, smoothly curving, or even spiral (helical, the root meaning of helictite), but in most cases they twist and turn erratically. Accordingly, the alternative names erratics, eccentrics, or eccentric stalactites have been used by some authors. Helictites are usually composed of calcite or aragonite, more rarely other minerals. They occur in a great range of sizes, from hair-thin and a fraction of a centimeter long to several centimeters wide and more than a meter in length. All share the common characteristic of a narrow central canal of capillary size.
Because helictites are so conspicuous and unusual in appearance, many (often fanciful) theories have been proposed for their origin. By growing artificial helictites of sodium thiosulfate, Huff (1940) demonstrated that hydrostatic pressure feeding capillary flow was the true mechanism. In natural carbonate helictites, the tip is extended by deposition of calcium carbonate around the central pore as the outflowing moisture evaporates or loses carbon dioxide. Moore (1954) subsequently explained helictite curvature by a combination of effects of impurities, crystallographic-axis rotation, and stacking of wedge-shaped crystals. These factors take precedence over gravity because the rate of flow is too slow to form a hanging drop at the tip. Increased flow can cause helictites to convert to soda-straw stalactites, and decreased flow, vice versa.
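The argument that flow is "too slow to form a hanging drop" is easy to check with back-of-envelope numbers; the rates below are assumptions chosen for illustration, not measurements from Huff or Moore:

```python
# How long would the capillary feed of a helictite need to accumulate one
# pendant drop? (Both numbers are assumed orders of magnitude.)
feed_rate_mm3_per_h = 1.0   # seepage through the hair-thin central canal
drop_volume_mm3 = 50.0      # a detaching gravity drop is roughly 0.05 mL

hours_per_drop = drop_volume_mm3 / feed_rate_mm3_per_h
# Tens of hours per drop: evaporation and CO2 loss consume the moisture at
# the tip long before gravity can act, so crystal-growth effects set the
# growth direction instead.
```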
Varieties including filiform (hair-like), vermiform (worm-like), and antler (forking) helictites have been defined on the basis of shape and size. Aragonite helictites may be beaded (Fig. 1), consisting of a string of conical beads of radiating fibrous crystals. The larger ends of the cones may face either the attached end of the helictite or the free end, but the orientation is usually consistent within each chain. This bizarre-looking beaded structure is relatively rare and has never been explained.
Karstification
The process of karstification is responsible for the formation of stalactites, stalagmites, columns, and other speleothems through re-precipitation of material dissolved from the host rock (Fig. 8). Degassing is also involved: as water that has filtered through the rock enters the cave environment, CO2 is released from the water, causing calcium carbonate to precipitate. This process does not happen in flooded caves.
Fig. 8. Speleothem formation.
Younger (lotic) cenotes are interconnected with the groundwater through fractures and dissolution features, while in the older (lentic) cenotes water can be stagnant owing to sedimentation and blocking of those fractures (Schmitter-Soto et al., 2002). The latter are often turbid, thermally and chemically stratified, oxygen-rich in the freshwater body and oxygen-depleted in organic-debris-rich horizons, either on the cave floor or in the density-stratified halocline.
The ring of cenotes (Fig. 9) consists of a semicircular alignment of cenotes with a diameter of 180 km and an average altitude of 13 m. It is a geological feature that functions as a source of water and as a set of nodes in the underground river system channelling water toward the coast (Marín et al., 2000; Perry et al., 1989; Steinich and Marín, 1997; Steinich and Marin, 1996; Perez-Ceballos et al., 2012). It extends from Celestun in the west to Dzilam in the northeast, forming a large arc across the middle of the state (Alcocer et al., 1999). Numerous productive and domestic activities take place around the ring of cenotes in the absence of wastewater treatment or sewage systems (Arcega-Cabrera et al., 2014).
Fig. 9. Cenotes in the “ring of cenotes” and the rest of the Yucatan Peninsula.
Mammoth Cave minerals
Mammoth Cave is not noted for extensive displays of speleothems, such as stalactites, stalagmites, columns, and flowstone. These depositional formations are confined to parts of the cave system where the clastic capping bedrock is thin or has been removed and carbonate-saturated ground water can descend. The Frozen Niagara section, shown to visitors, contains a significant display of these speleothems (Fig. 7).
Various evaporative minerals can be seen in parts of the cave where the clastic caprock above remains intact. These include gypsum crystals, needles, cotton, flowers, massive crusts, and loose deposits resembling drifted snow (Fig. 8). Less common are epsomite and mirabilite crystals. The origin of the sulfate is pyrite in the overlying beds as indicated by recent isotope studies. Aborigines gathered and used mirabilite as a laxative and perhaps as food seasoning, gypsum as a token of manhood or possible body paint ingredient, and selenite crystals as ceremonial objects.
Abstract
Speleothems are mineral formations occurring in limestone caves, most commonly as stalagmites and stalactites or slablike deposits known as flowstones. Stalactites (which hang from the ceilings of caves) often have a hollow core, with growth occurring around this central orifice, whereas stalagmites are solid and grow incrementally at the drip site. Thus, stalagmites are generally selected for paleoclimatic analysis. The extensive distribution of karst landscapes means that studies can be undertaken on a worldwide basis. Speleothems are primarily composed of calcium carbonate, precipitated from groundwater that has percolated through the adjacent carbonate host rock. Certain trace elements may also be present (often giving the deposit a characteristic color), and one of these, uranium, can be used to determine the age of a speleothem, as discussed in the succeeding text. Seasonal variations in the trace element composition of dripwaters may also be used to identify annual layers. Deposition of speleothems results from evaporation of water or degassing of carbon dioxide from water droplets. Evaporation is normally only an important process near cave entrances; most speleothems from deep within caves therefore result from the degassing process. Water that has percolated through soil and been in contact with decaying organic matter usually accrues a partial pressure of carbon dioxide exceeding that of the cave atmosphere. Thus, when water enters the cave, degassing of carbon dioxide occurs, causing the water to become supersaturated with calcite, which is thus precipitated.
Most stalactites and stalagmites are composed of calcite, a few of aragonite, the rhombohedral and orthorhombic phases of calcium carbonate (CaCO3), respectively. Rare stalactites and stalagmites consisting of huntite (a Mg-carbonate), halite (NaCl), gypsum (CaSO4·2H2O), and even opal (amorphous hydrated SiO2) have been found.
Figure 1. Stalactites, both soda straws and cone stalactites, candle-shaped stalagmites, columns (stalactites and stalagmites merged), and stalagmitic flowstone coating the cave floor. The plastic containers host glasses onto which in situ calcite precipitation experiments have been carried out to determine the processes that influence the development of different crystals.
Stalactites and stalagmites likely started to develop in caves when the first carbonate rocks were subaerially exposed and eroded, well over 1 billion years ago. Most speleothems that have been extensively studied date from the Quaternary, and their genesis is commonly driven by the process of degassing, which occurs when drip waters with a high carbon dioxide partial pressure (pCO2) interact with a cave atmosphere that has a relatively low pCO2. It is therefore believed that the occurrence of stalagmites and stalactites increased greatly after the rise of vascular plants in the Devonian, which led to an acceleration of chemical weathering, greater availability of soil CO2, and a decline in global atmospheric CO2 concentration (Alonso-Zarza and Tanner, 2010).
The formation and morphology of ice stalactites observed under deforming lead ice
(https://www.cambridge.org/core/journals/journal-of-glaciology/article/formation-and-morphology-of-ice-stalactites-observed-under-deforming-lead-ice/29021BE4AF13B35ED2F46E59F65DA630)

Abstract
During the LeadEx main field experiment, held in April 1992 in the Alaskan Beaufort Sea, a number of large ice stalactites were observed growing under young lead ice. Formation of the stalactites was associated with rafting of the thin, highly saline ice. The rafting caused the brine to drain rapidly from the ice at a temperature well below the freezing point of the surrounding water, which in turn caused ice to form in a hollow cylinder around the brine plume. Within a 15 h period after the rafting event, the stalactites, which were located approximately 10 m apart in a line along the upwind edge of a 150 m wide lead, had grown to a length of 2 m. A detailed structural analysis of the upper part of one of these stalactites revealed that the interior channel, down which the brine flowed, was bounded by a zone of frazil ice that developed into a shell of columnar ice. The growth of the columnar ice was directed radially outward and the c axes of these crystals were oriented perpendicular to their growth direction. Development of the stalactites illustrates the impact ice deformation can have on the process of brine rejection in freezing leads and potentially on the thermohaline structure of the upper ocean in the immediate vicinity of the lead.
Introduction
When sea ice grows, salts are rejected into the underlying water as a highly saline brine. Introduction of this brine causes an increase in both the salinity and density of the water and, consequently, the development of an unstable buoyancy flux. In the case of a freezing lead, the buoyancy flux is concentrated in a narrow band, intensifying the effects of this instability on the thermohaline structure of the upper ocean. Models developed to assess the oceanographic impact of the process of brine rejection in freezing leads have focused on the effects of lead width and the ice-water velocity difference (Kozo, 1983; Smith and Morison, 1993). The surface boundary condition at the ice-water interface, which describes the input of brine, is typically represented as a salt flux right at the surface or as a distributed source that decreases exponentially from the surface, reaching 1/e of its surface value 2 m below the ice-water interface (Morison and others, 1992). In both cases, the salt flux is assumed to be horizontally uniform, reflecting an assumption that the process of brine rejection from the ice does not vary across the lead.
While studying brine rejection from freezing leads during the LeadEx main field experiment, we came to realize that ice deformation could have an impact on the process of brine rejection, both by affecting the rate of brine rejection and by localizing the input of brine to the ocean (LeadEx Group, 1993). The most dramatic illustration of this came serendipitously, when we had the opportunity to observe and recover a part of a 2 m long ice stalactite that formed under the lead ice after a rafting event.
Martin (1974) successfully grew under-ice stalactites in the laboratory and, based on that work, developed a detailed description of the mechanisms associated with their growth. Briefly, sea ice consists of pockets of brine trapped in a fresh-ice matrix (Weeks and Ackley, 1986). Influenced by a variety of desalination processes, this brine drains down through the ice and forms a system of tubes or brine channels. At the ice-water interface, the brine in these channels enters the sea water. Under freezing conditions, the draining brine is colder and saltier than the sea water, both of which are at their salinity-determined freezing points. The colder brine rapidly gains heat by freezing water around itself. If the flow of brine is sufficient and sustained, this process continues and a stalactite forms around the plume of draining brine. Since salts cannot be correspondingly conducted through the walls of the stalactite, the concentration of the brine near the inner wall of the stalactite increases above the liquidus curve. To reestablish equilibrium, ice along the interior wall melts, cooling and diluting the adjacent brine. The source of heat for the interior melting is provided by the growth of ice at the outer wall. As a result of this process, the stalactite also becomes thicker as it grows in length.
Since the growth of large under-ice stalactites marks the injection of a considerable amount of cold, dense brine into the water column, interest in the process of their formation extends beyond simple curiosity. For instance, Dayton and Martin (1971) first commented on their potential impact on upper-ocean mixing. In this paper, we suggest that the formation of stalactites is an indication of the role deformation may play in the desalination process of the ice sheet and also speculate on potential oceanographic implications associated with their development. Our comments are based on field observations made while the stalactites were growing and on a detailed structural analysis of a part of one stalactite. The latter adds a new perspective on the formation and growth process, since no previous reports have included data on the crystal structure of these ice features. Physical-property data from the retrieved stalactite are used to provide estimates of the brine flux associated with the development of the stalactites, which are then compared to the rate of brine rejection from undeformed lead ice.
Observations
The LeadEx main experiment, held in April 1992 in the Alaskan Beaufort Sea, was an interdisciplinary program designed to study ocean-ice-atmosphere interactions at freezing leads, where there are huge fluxes of heat from the ocean to the atmosphere and of salt from the ice to the ocean. A major scientific focus of this program was studying the input of salt to the upper ocean from brine rejected from the freezing lead ice. Salt fluxes were estimated from detailed CTD casts in the upper ocean and from time-series measurements of the salinity of the lead ice. As part of the oceanographic program, we operated an Autonomous Conductivity Temperature Vehicle (ACTV) to measure salinity and temperature under leads. A small remotely operated vehicle (ROV), equipped with a television camera and special manipulator claw, was used to observe ACTV operations and provide back-up recovery. In the course of these operations at the edge of a 1 d old, 150 m wide lead, we observed under-ice stalactites. After monitoring the stalactites over an 8 h period, we used the ROV manipulator arm to retrieve one of the stalactites for a detailed structural analysis.
Formation and growth
In the early afternoon of 11 April, we began scientific operations at the upwind edge of a newly opened lead. By that evening, the lead ice was a few centimeters thick. ROV observations made on the night of 11 April showed numerous small stalactites under the lead ice. These stalactites were approximately 0.1 m in length and appeared to be distributed uniformly on the bottom of the lead ice.
Between 0400 and 0600 h on the morning of 12 April, extensive rafting occurred along the upwind edge of the lead: 0.07 m thick young ice was pushed onto the adjacent 0.90 m thick first-year sea-ice sheet. When the event was over, the young ice covered the thicker ice over an area 3.5 m wide. The skeletal layer of this thin ice was sheared off during the rafting, reducing its thickness to 0.04 m. During an ROV run about 8 h later (1300 h), we observed an array of large ice stalactites growing under the thin lead ice in a line parallel to the edge of the adjacent thick ice (Fig. 1). The line of stalactites was located about 1 m from the thick-ice edge, at the junction of the tilted, rafted ice and the level, undeformed lead ice. The stalactites were spaced approximately 5-10 m apart along this line. Figure 2 is a schematic showing the geometry of the rafted ice and stalactites. At the time of this first observation, the stalactites were about 1 m long and 0.06 m in diameter, and consisted of large, flat, loosely connected ice crystals. They had little structural integrity and would disintegrate when touched by the ROV manipulator arm. This frailty is evidenced in Figure 1 by the almost transparent appearance of the stalactite in the foreground.
Fig. 1. Underwater photograph of ice stalactites taken at 1300 h on 12 April 1992, using the ROV video camera. The translucent appearance of the meter-long stalactite in the foreground is due to its being composed of a loose collection of large, platy crystals with high porosity. Note in the background how the long stalactites lie in a line.
Eight hours later (2100 h), the stalactites had grown to as much as 2 m in length and 0.1 m in diameter. They extended below the bottom of the thick ice and exhibited a slight curvature in the direction of the current. The ice in the upper meter of the stalactites had consolidated considerably, to the point where it was possible to use the ROV to retrieve a stalactite. During this retrieval, the still-fragile bottom part of the stalactite was lost, but we were able to recover the top 0.75 m.
Fig. 2. Schematic illustrating the geometry of the rafted ice and stalactites.
As mentioned, the formation of an under-ice stalactite requires a plume of cold, dense brine (Martin, 1974). Measurements made on ice cores taken soon after the rafting event indicated that the temperature of the ice, and hence the draining brine, was approximately −10°C. This was well below the freezing point of the surrounding water, −1.7°C, resulting in the rapid growth of ice crystals around the brine plume. Also critical to the development of the larger stalactites is an adequate volume flux of brine. We believe that the recently rafted ice was the source of the requisite brine. The widely dispersed brine that was rejected from the thin, salty, undeformed lead ice is only adequate to create the smaller stalactites initially observed beneath the undeformed ice (Dayton and Martin, 1971). Once this young ice was rafted, however, cooling of the ice and an increase in elevation resulted in a more rapid, and apparently more localized, drainage of brine.
Morphology
Since the upper part of the stalactite had consolidated, we were able to retrieve it, which provided a unique opportunity to investigate its morphology. A visual inspection in the field (Fig. 3) indicated that, while the outside wall of the stalactite had some small bumps and bends, the outside diameter (0.1 m) was remarkably constant along its length. One of the most immediately striking aspects of the stalactite was the tortuosity of the internal brine channel. It was not a simple cylindrical channel down the middle of the stalactite but rather a complex, twisting passageway that changed size, shifted position and often split into multiple channels along the length of the stalactite.
The stalactite, packaged with dry ice, was shipped back to the U.S. Army Cold Regions Research and Engineering Laboratory for a quantitative analysis of its properties. From samples taken at 0.02 m intervals along the length of the stalactite, ice salinities were measured and horizontal thin sections were prepared. Measured ice salinities ranged from 7 to 10 ppt, but we believe that the actual in situ values were significantly higher, as a considerable amount of brine drainage occurred when the stalactite was retrieved. The thin sections were photographed under polarized light to identify the ice-crystal structure (Tucker and others, 1987). A personal-computer-based image-processing system (Perovich and Hirai, 1988) was then used to determine the ice area, the brine-channel area and the relative amounts of frazil and columnar ice for each of the sections.
Figure 4a shows vertical profiles of the cross-sectional area of the brine channels and the ice. The cross-sectional area of the brine channels varies between 3 and 9 cm2 for the top 0.25 m, then increases gradually with depth to values typically in the 7-15 cm2 range. The cross-sectional area of the ice averages 60 cm2, with fluctuations of ±10 cm2 throughout the length of the stalactite. This indicates that, in the mature part of the stalactite, the outside diameter and the ice volume do not vary significantly with depth. Using the photographs taken between crossed polarizers, we determined the relative areas of frazil and columnar ice in each thin section. As Figure 4b indicates, the sections were typically composed of 30% frazil and 70% columnar ice, with fluctuations of ±10%. There was no significant variation in the relative amounts of frazil and columnar ice along the length of the stalactite.
The internal structure of the stalactite is illustrated in the horizontal thin-section photographs (Fig. 5). The four thin sections were located at (a) 0.22 m, (b) 0.28 m, (c) 0.30 m and (d) 0.46 m along the stalactite. The section at 0.22 m (Fig. 5a) shows the most basic structure observed and is representative of roughly half of the retrieved stalactite. It consists of a single, round brine channel near the center, immediately surrounded by an inner zone of frazil ice and then by a ring of columnar ice. Usually, the collection of frazil ice was to one side of the brine channel. Figure 5b and c illustrates how quickly the size and shape of the inner brine channel can change. Over a distance of only 0.02 m, the size of the brine channel was reduced by a factor of 6 and the shape changed from a large triangle to a small circle. As Figure 5d indicates, not only can the size and shape of the brine channel change, but the number of channels can also vary. In this thin section, the brine channel had divided into five separate subchannels. The variability in the size, shape and number of brine channels, along with the presence of frazil-ice crystals, indicates that the fluid flow inside the stalactite was convectively unstable. This is consistent with Martin's (1974) suggestion that convective instabilities develop inside the stalactite, due to variations in the density of the brine, and cause overturning.
Fig. 3. Photograph of the ice stalactite taken immediately after retrieval. Though there are some bumps and curves in the stalactite, the outer diameter is fairly constant along its length.
The columnar ice that forms the outside wall of the stalactite is nucleated from the frazil-ice crystals that form its core. Once initiated, the columnar crystals usually extend all the way to the outside wall of the stalactite. A crystal-fabric analysis of the thin sections showed that the growth of the columnar ice was directed radially outward, with the c axes of the crystals lying in the horizontal plane perpendicular to the growth direction. This is the same relative crystal orientation as in the columnar ice found in a sea-ice sheet; the difference lies only in the growth direction, which is radial for the stalactite and vertical for a sea-ice sheet.
Discussion
As mentioned earlier, we believe that the subsequent brine drainage of rafted ice provided the source of brine needed to form the stalactite. For this to be true, there must have been a sufficient quantity of brine available in the young lead ice. No direct measurements were made of the volume of brine expelled from the stalactite. While a brine plume draining from the stalactite was observed in the ROV video, we were unable to measure the plume temperature, salinity or flow rate.
It is possible, however, to formulate rough estimates of the heat content of the stalactite and of the volume of rejected brine that served as the necessary heat sink. To first order, the heat content (Q) of the stalactite is simply Q = WLf, where W is the weight of the ice in the stalactite and Lf is the latent heat of fusion of ice (0.33 MJ kg−1). The weight of the retrieved part of the stalactite was 3.1 kg. We can only estimate the weight of the part that was lost. In the video, it appeared to be about 0.75 m long and 0.08 m in diameter. The ice content is not known, but from its lack of cohesiveness we shall assume that the lower part was 25% ice and 75% brine. This gives an ice weight for the lower part of approximately 0.9 kg and a total stalactite weight of 4.0 kg. Thus, the heat content of the entire stalactite is roughly 1.35 MJ. The volume of rejected brine (V) needed to extract the heat content of the stalactite is
V = Q/[βρbc(T0 − Tb)]        (1)
where β is the fraction of the heat extracted from the brine that goes to freezing the stalactite, ρb is the density of the rejected brine, c is the specific heat of the brine (4.2 kJ kg−1 °C−1), Tb is the temperature of the brine and T0 is the temperature of the underlying sea water (T0 ~ −1.7°C). According to Martin's laboratory experiments, a minimum of half of the heat extracted by the cold brine contributes to the formation of the walls of the stalactite (β = 0.5). The remaining heat is lost to ice crystals that grow in the vicinity of the tip of the stalactite but are swept out to the underlying ocean. The density of the brine plume was determined from ice physical-property measurements, which showed that the lead ice had a bulk salinity of 18 ppt and a mean temperature of −10°C. Assuming that the brine was at its salinity-determined freezing point and had a temperature of −10°C, the brine salinity was 150 ppt (Fujino and others, 1974) and the density was 1120 kg m−3 (Gebhart and Mollendorf, 1977). Substituting these values into Equation (1) gives an estimate of 70 l for the volume of brine that flowed through the stalactite. Based on these approximations, then, for a flow period of 15 h (0600-2100 h), the average flow rate was 1.3 ml s−1. This is an order of magnitude less than the value of 18 ml s−1 estimated by Dayton and Martin (1971) for Antarctic stalactites.
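The arithmetic behind Equation (1) can be checked in a few lines. All quantities below are the values quoted in the text; the small difference from the quoted 70 l reflects rounding of the heat content.

```python
# Back-of-envelope check of the brine-volume estimate in Equation (1).
L_f = 0.33e6      # latent heat of fusion of ice, J/kg
W = 3.1 + 0.9     # ice weight: retrieved part + estimated lost lower part, kg
Q = W * L_f       # heat content of the stalactite, J (~1.3 MJ)

beta = 0.5        # fraction of extracted heat that freezes the walls (Martin, 1974)
rho_b = 1120.0    # brine density, kg/m^3
c = 4200.0        # specific heat of brine, J/(kg C)
T_b, T_0 = -10.0, -1.7   # brine and sea-water temperatures, deg C

V = Q / (beta * rho_b * c * (T_0 - T_b))          # m^3
litres = V * 1000.0
flow_ml_per_s = litres * 1000.0 / (15 * 3600.0)   # spread over the 15 h flow period

print(f"V ~ {litres:.0f} l, mean flow ~ {flow_ml_per_s:.1f} ml/s")
```

The result is within a few litres of the quoted 70 l, and the mean flow rate reproduces the 1.3 ml s−1 given in the text.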
Fig. 4. Plots of (a) the cross-sectional area of ice and brine channels along the length of the stalactite and (b) the relative amounts of frazil and columnar ice in the stalactite. Zero depth corresponds to the base of the stalactite.
Could the rafted ice supply 70 l of brine to the stalactite? As described earlier, the underwater video indicated that the stalactites were spaced roughly 5-10 m apart and were located 1 m from the edge of the thick ice. We also observed that the strip of young lead ice that had rafted onto the thick ice along the length of the lead was 3.5 m wide and 0.04 m thick (Fig. 2). From this, we assumed a stalactite "drainage basin" that was 4.5 m wide, 5-10 m long and 0.04 m thick, for a total ice volume of 0.9-1.8 m3. Ice with a temperature of −10°C and a salinity of 18 ppt has a brine volume of 10% (Frankenstein and Gardner, 1967). Therefore, the total amount of brine in the "drainage basin" is 90-180 l, implying that 40-80% of this brine drained through the stalactite. While not definitive proof that the rafted ice was the brine source for the stalactite, it is certainly supporting evidence.
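The same "drainage basin" budget can be sketched in code, using only the dimensions and brine volume fraction quoted above (the 70 l demand comes from Equation (1)):

```python
# Rough brine budget for one stalactite's "drainage basin".
width = 4.5            # m: 3.5 m rafted strip + 1 m to the thick-ice edge
length_min, length_max = 5.0, 10.0   # stalactite spacing, m
thickness = 0.04       # rafted ice thickness, m
brine_fraction = 0.10  # brine volume at -10 C, 18 ppt (Frankenstein and Gardner, 1967)

v_min = width * length_min * thickness * brine_fraction * 1000.0  # litres
v_max = width * length_max * thickness * brine_fraction * 1000.0

stalactite_demand = 70.0              # litres, from Equation (1)
frac_min = stalactite_demand / v_max  # fraction drained if the basin is large
frac_max = stalactite_demand / v_min  # fraction drained if the basin is small

print(f"available brine: {v_min:.0f}-{v_max:.0f} l")
print(f"fraction drained through stalactite: {frac_min:.0%}-{frac_max:.0%}")
```

This reproduces the 90-180 l supply and the quoted 40-80% drainage fraction.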
The likelihood that deformed ice was the brine source for the stalactite raises an intriguing question: what role does deformation play in the desalination of young lead ice and, consequently, the input of salt to the ocean? Conventional thinking regarding desalination has been directed towards an ice-growth-only scenario (Morison and others, 1992). In the simplest case under these conditions, the salt flux due to brine rejected from undeformed ice is distributed uniformly in space, and the rate of rejection varies smoothly with time, reaching a maximum within the first few hours of growth, then decreasing as the ice-growth rate decreases. The observations described in this paper, however, suggest that deformation of the young ice may considerably complicate this picture and could cause the salt flux to be more variable, in both time and space.
While we do not have the detailed observations necessary to assess fully the impact of dynamics on ice desalination, we can gain some perspective on the role of deformation in the desalination process by comparing the amount of salt rejected during formation of the stalactite to that which occurred in an undeformed ice sheet. This is done keeping in mind that there is considerable natural variability in the rate and mechanisms of ice desalination, and results from a comparison of two cases are by no means definitive. Data for the undeformed case come from measurements made a few days earlier at a different lead, where there was little deformation. Meteorological conditions at both lead sites were similar during the initial phase of ice growth: skies were clear, the wind was light and the average daily air temperature was −21°C. Ice cores were periodically removed from the growing lead ice and analyzed for temperature, salinity and structure. Following the methodology of Gow and others (1990), time-series measurements of bulk-ice salinity (Sb) and ice thickness (H) can be used to estimate the desalination rate and the amount of salt rejected per unit area (Sr) from the ice to the ocean. The desalination rate is simply the change in the bulk salinity of the ice per unit time (dSb/dt). The amount of salt rejected is defined by

Sr = ρH(Sw − Sb)        (2)
where ρ is the density of the sea ice (920 kg m−3) and Sw is the salinity of the water (30 ppt). We can then determine the salt flux (Sf) by taking the time derivative of the salt rejected (Sf = dSr/dt). In order to make this calculation for the deformed ice, we recall our earlier estimates that approximately 40-80% of the brine in the rafted ice drained through the stalactite, and note that additional brine may have drained from the rafted ice but not through the stalactite. Based on this, we will assume a conservative estimate of 60% total brine loss over the described "drainage basin", in effect reducing the bulk salinity of the rafted ice from 18 to 7 ppt over a 15 h period.
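As a check on the deformed-case figures quoted below, the following sketch applies the salt-rejection relation Sr = ρH(Sw − Sb), with salinities in ppt carrying a factor of 10^-3; this written form is our reading of the Gow and others (1990) methodology and should be treated as an assumption.

```python
# Deformed-case desalination rate and salt flux, using values from the text.
rho = 920.0              # sea-ice density, kg/m^3
H = 0.04                 # rafted ice thickness, m
S_b0, S_b1 = 18.0, 7.0   # bulk salinity before/after the assumed 60% brine loss, ppt

days = 15.0 / 24.0                                  # 15 h drainage period, in days
desal_rate = (S_b0 - S_b1) / days                   # ppt/d
salt_rejected = rho * H * (S_b0 - S_b1) * 1e-3      # kg/m^2 lost during drainage
flux = salt_rejected / days                         # kg m^-2 d^-1

print(f"desalination rate ~ {desal_rate:.1f} ppt/d")
print(f"salt flux ~ {flux:.2f} kg m^-2 d^-1")
```

The desalination rate reproduces the 17.6 ppt d−1 quoted for the deformed case, and the flux comes out close to the ~0.7 kg m−2 d−1 cited later in the discussion.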
Table 1 summarizes ice thickness, bulk salinity, ice-desalination rate, the total salt rejected and the salt flux from the ice for the deformed and undeformed cases. As expected, the desalination rate in the undeformed ice reached a maximum during the first few hours, when growth rates were large, and then decreased as the rate of growth slowed. In the deformed ice, even using the conservative estimate of brine loss, there was a high desalination rate of 17.6 ppt d−1. This value was the average desalination rate for the entire period we observed the stalactite. In all likelihood, the rate was higher when the ice first rafted, then decreased with time. Aside from the initial few hours of growth, when the rates were comparable, desalination was always greater in the deformed ice than in the undeformed. This illustrates that lifting highly saline ice out of the water is an effective desalination mechanism.
Table 1. A comparison between undeformed and deformed ice of desalination and salt flux from the ice to the ocean. Time denotes how long the lead ice has been growing.
While the deformed ice has a higher desalination rate, the salt flux from the deformed ice is, for the most part, smaller than from the undeformed. The larger flux in the undeformed case results from the contribution to the salt flux made by the brine rejected from the new ice growth. In the deformed ice, growth stopped once it rafted. Though not as large as in the initial growth stage of the undeformed ice, the salt flux resulting from deformation was still a considerable 0.7 kg m−2 d−1. Similar estimates of the salt flux from ice growing in a freezing lead made by Gow and others (1990) indicate that, after 7 d of growth, the salt flux decreased to an average of 0.17 kg m−2 d−1. More important to consider are differences in the character of the salt flux. In the undeformed case, the salts are rejected uniformly over a wider area. The deformed case exhibits much more spatial variability, with the brine injected through the stalactite into the ocean as a highly concentrated point source.
To determine the impact of this plume of cold, dense brine on the thermohaline structure of the underlying water, the question of its penetration depth must be addressed. The penetration depth is a function of the density of the brine plume relative to the underlying sea water, as well as the rate and character of the injected brine. In the case we observed, the brine plume exiting the stalactite had a higher density than the underlying sea water: 1120 kg m−3 compared with 1020 kg m−3. The stalactite plume, as seen in the video, appears to become nearly horizontal within 1 m of the stalactite. Since the ambient horizontal flow was an order of magnitude greater than the estimated vertical velocity of the brine in the stalactite, it is likely that the plume would have been carried several hundred meters downstream before it settled to the base of the mixed layer at a depth of 30 m. Rough estimates determined using smoke-stack theory (Csanady, 1965; Slawson and Csanady, 1967) indicate that over this distance the plume would have spread to a few meters across, causing a marked decrease in the salinity perturbation. This suggests that, while the stalactite plumes may be identifiable some distance from the source and could contribute to mixing in the upper layer, it is unlikely that they would penetrate the pycnocline. These comments are speculative, but they do suggest that more detailed modeling aimed at assessing the impact of the brine plume on the thermohaline structure of the upper ocean is warranted.
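The "several hundred meters" estimate follows directly from the stated velocity ratio, independent of the actual speeds. A minimal sketch, where the descent speed w is an arbitrary illustrative value (only the ratio u/w ~ 10 and the 30 m mixed-layer depth come from the text):

```python
# Downstream distance travelled before the plume settles to the mixed-layer base.
mixed_layer_depth = 30.0   # m, base of the mixed layer (from the text)
w = 0.003                  # assumed effective descent speed, m/s (illustrative only)
u = 10.0 * w               # ambient current, an order of magnitude larger than w

settle_time = mixed_layer_depth / w   # time to reach the mixed-layer base, s
distance = u * settle_time            # m; equals 10 * depth for any choice of w

print(f"plume settles ~{distance:.0f} m downstream")
```

Because distance = (u/w) * depth, the 300 m result depends only on the velocity ratio and the mixed-layer depth, consistent with the order-of-magnitude argument in the text.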
This study has provided information on the formation and morphology of under-ice stalactites and has indicated that ice deformation may make an important contribution to the process of ice desalination. It has also raised questions that need to be addressed in future studies. These range in scope from the formation process of the stalactite to the oceanographic implications of deformationally driven brine drainage. For instance, in the case of stalactite formation, it was shown that the rafted ice had enough brine available to form the stalactite, if there was significant horizontal movement of the brine within the rafted sea-ice sheet. A thorough structural analysis of the deformed ice, directed towards identifying the network of brine-drainage features, should be made to confirm this hypothesis. Another related question concerns the reason why the brine exits at regularly spaced, discrete points rather than continuously along the line formed by the junction of the tilted, rafted ice and the level, undeformed lead ice. One possible explanation is that there were small-amplitude undulations in the rafted ice that ran along the edge of the lead. The combination of these undulations with the junction of the rafted and undeformed ice (Fig. 2) would effectively create a drainage basin for each of the stalactites. Elevation measurements to define the small-scale topography of the sea-ice sheet surrounding a stalactite would be useful in evaluating this hypothesis.
The role of deformation in the desalination of young ice is of potential oceanographic importance. Though limited, our observations during both the LeadEx pilot and main experiments indicate that the deformation of thin lead ice was pervasive and that rafting was common. It is important, therefore, to determine and quantify the relative contribution of deformation in the process of desalination of sea-ice sheets so that the entire process, including both the deformed and undeformed components, can be appropriately represented in models. The formation of the stalactites due to rafting of thin ice over thick ice probably represents an extreme case of enhanced desalination due to deformation. It does serve to show, however, that deformation can have an impact on both the quantity and nature of brine drainage from a sea-ice sheet. Other common cases that need to be considered include the rafting of thin ice over thin ice and the building of ridges and rubble. The impact of deformation on desalination in these instances may be different from the case of thin ice rafting over thick ice. For example, when thin ice rafts over thin ice, there is little change in freeboard compared to the instance when thin ice rafts over thick ice. Without this change in freeboard, there is little increase in the elevation of the sea-ice sheet and subsequently little enhanced drainage. In this case, deformation also increases the total ice thickness, thereby reducing the salt flux due to new growth. Another characteristic that is likely to influence the amount of brine drainage from deformed ice is the original thickness of the sea-ice sheet. Though the bulk salinity of a sea-ice sheet typically decreases with increasing thickness, thicker ice may have more total brine available for drainage than thinner ice.
However, it does not necessarily follow that more brine will drain from thicker sea-ice sheets during a deformation event, since the thicker ice is colder and may lose a smaller fraction of its brine. Additional field and laboratory measurements of the temporal changes that occur in the salinity profile of the ice before and after deformation, coupled with measurements of the physical properties of the ice and detailed observations of the nature of the deformation, are necessary before these processes can be accurately described. Key issues that need to be investigated include determining the fraction of the brine that is lost from deformed ice and how this fraction varies with thickness and time. Such results then need to be coupled with large-scale studies of the extent and frequency of the various modes of deformation in order to assess the oceanographic impact of desalination due to deformation.
Summary
Large ice stalactites, 1-2 m in length and 0.05-0.10 m wide, were observed under a deformed lead in the Beaufort Sea. These stalactites grew rapidly around plumes of cold, dense brine. The source of the brine was enhanced drainage that resulted when salty lead ice rafted onto adjacent thick ice. A morphological analysis of a retrieved stalactite indicated that the brine flowed through a tortuous central channel of variable size that often branched into several sub-channels. The stalactite was composed of a combination of frazil (30%) and columnar (70%) ice, typically with an inner zone of frazil ice adjacent to the brine channel, surrounded by a ring of columnar ice. The columnar ice grew radially outward with the crystal c axes in the horizontal plane, perpendicular to the growth direction. Estimates of the heat and brine associated with stalactite formation yielded an average brine-flow rate of approximately 1 ml s−1 through the stalactite. Further calculations show that there was indeed an ample supply of cold, dense brine available in the rafted lead ice to provide this flow and form the stalactites. The brine drained through these stalactites represents a significant part of the brine lost from deformed young ice to the ocean. Since thin, warm lead ice is easily and frequently deformed, it follows that the dynamics of the lead ice, as well as the thermodynamics, may have an effect on the thermohaline structure of the upper ocean.
Acknowledgements
The authors thank the U.S. Office of Naval Research for funding this work under the Leads Initiative. They also thank S.F. Ackley, A.J. Gow and three anonymous reviewers for their insightful and helpful comments.
Fig. 1.Underwater photograph of ice stalactites taken at 1300 h on 12 April 1992, using the ROV video camera. The translucent appearance of the meter-long stalactite in the foreground is due to its being composed of a loose collection of large, platy crystals with high porosity. Note in the background how the long stalactites lie in a line.
Fig. 2.Schematic illustrating the geometry of the rafted ice and stalactites.
Fig. 3.Photograph of the ice stalactite taken immediately after retrieval. Though there are some bumps and curves in the stalactite, the outer diameter is fairly constant along its length.
Fig. 4.Plots of (a) the cross-sectional area of ice and brine channels along the length of the stalactite and (b) the relative amounts of frazil and columnar ice in the stalactite. Zero depth corresponds to the base of the stalactite.
Abstract
During the LeadEx main field experiment, held in April 1992 in the Alaskan Beaufort Sea, a number of large ice stalactites were observed growing under young lead ice. Formation of the stalactites was associated with rafting of the thin, highly saline ice. The rafting caused the brine to drain rapidly from the ice at a temperature well below the freezing point of the surrounding water, which in turn caused ice to form in a hollow cylinder around the brine plume. Within a 15 h period after the rafting event, the stalactites, which were located approximately 10 m apart in a line along the upwind edge of a 150 m wide lead, had grown to a length of 2 m. A detailed structural analysis of the upper part of one of these stalactites revealed that the interior channel, down which the brine flowed, was bounded by a zone of frazil ice that developed into a shell of columnar ice. The growth of the columnar ice was directed radially outward and the c axes of these crystals were oriented perpendicular to their growth direction. Development of the stalactites illustrates the impact ice deformation can have on the process of brine rejection in freezing leads and potentially on the thermohaline structure of the upper ocean in the immediate vicinity of the lead.
Introduction
When sea ice grows, salts are rejected into the underlying water as a highly saline brine. Introduction of this brine causes an increase in both the salinity and density of the water and, consequently, the development of an unstable buoyancy flux. In the case of a freezing lead, the buoyancy flux is concentrated in a narrow band, intensifying the effects of this instability on the thermohaline structure of the upper ocean. Models developed to assess the oceanographic impact of the process of brine rejection in freezing leads have focused on the effects of lead width and the ice-water velocity difference (Kozo, 1983; Smith and Morison, 1993).
Spelaeology | Can stalactites form underwater? | yes_statement | "stalactites" can "form" "underwater".. "underwater" conditions can lead to the formation of "stalactites". | https://www.mybestplace.com/en/article/hells-bells-the-mysterious-underwater-stalactites | Hell's Bells, the Mysterious Underwater Stalactites - MyBestPlace | Hell's Bells, the Mysterious Underwater Stalactites
Hell's Bells are unusual and splendid geological formations recently discovered in the deep waters of Cenote Zapote, a famous cave located west of Puerto Morelos, on the Mexican peninsula of Yucatan. These strange stalactites can measure up to two meters in height and are found at depths ranging from 28 to 33 meters. They are named after their bizarre form: their shape resembles that of bells, giving them the nickname "Bells of Hells".
The formations were calcified under water in an environment with no light, immersed in a layered mixture of fresh water and toxic salt water that is deprived of oxygen and rich in sulphide. A German-Mexican research group led by Prof. Dr. Wolfgang Stinnesbeck of Heidelberg University recently reported that the growth of these "cave formations" may have been driven by microbes whose activity raises the pH of the surrounding water and thereby supports the precipitation of calcite. Using uranium-thorium dating of the calcium carbonate, the researchers also established that growth took place underwater and that the "bells" formed in ancient times.
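Uranium-thorium dating works because ²³⁰Th grows into freshly precipitated calcite from dissolved uranium at a known rate. As a heavily simplified illustration (assuming the ²³⁴U/²³⁸U activity ratio is in equilibrium and there is no initial thorium — real dating corrects for both, and the measured ratio below is invented, not from this study):

```python
import math

# Simplified U-Th age equation: with 234U/238U in secular equilibrium and
# no initial 230Th, the 230Th/238U activity ratio grows as 1 - exp(-lambda*t).
HALF_LIFE_230TH = 75_584.0                  # years
LAMBDA_230 = math.log(2) / HALF_LIFE_230TH  # decay constant, 1/yr

def age_years(th230_u238_activity: float) -> float:
    """Invert the ingrowth curve to get an age from a measured activity ratio."""
    return -math.log(1.0 - th230_u238_activity) / LAMBDA_230

# Invented measurement, purely for illustration:
print(f"{age_years(0.10) / 1000:.1f} kyr")  # ~11.5 kyr
```

The older a sample, the closer the ratio creeps toward 1, which is why the method tops out after several half-lives of ²³⁰Th.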
According to Prof. Stinnesbeck, this underwater world provides an ecosystem with ideal conditions for the formation of the largest underwater calcareous stalactites in the world, unique not only in shape and size but also in their method of growth. Mineral deposits of this type discovered earlier are much smaller and less visible than these Hell's Bells.
The Cenotes of Yucatan are among the most fascinating natural wonders of Mexico and the world, combining religion and the mystery of the Mayan civilization. They are considered "sacred wells", as they are sources of drinking water with spiritual and healing properties. Within these caves lie hidden underwater worlds yet to be discovered, submerged places of great beauty. Hell's Bells are just another prodigy of Mother Nature.
(Diving to this site requires expert divers)
"The photos on this site are owned by users or purchased from image banks" | Hell's Bells, the Mysterious Underwater Stalactites
Hell's Bells are unusual and splendid geological formations recently discovered in the deep sea of Cenote Zapote, a famous cave located west of Puerto Morelos, on the Mexican peninsula of Yucatan. Known as Hell’s Bells, these strange stalactites can measure up to two meters in height, and are immersed at a depth ranging from 28 to 33 meters. They are named after their bizarre form, as their shape resemble that of bells, giving them the nickname “Bells of Hells”.
The formations have been calcified under water in an environment with no light, immersed in a mixture of fresh water and a portion of toxic salt water, deprived from oxygen and rich in sulphide. A German-Mexican research group led by Prof. Dr. Wolfgang Stinnesbeck of the University of Heidelberg’s scientific institute recently published that the growth of these "cave formations" may have occurred through the action of some microbes involved in the cycle, and due to their ability to increase the pH and therefore support the precipitation of calcite. Researchers, using uranium-thorium dating of calcium carbonate, also discovered that growth took place underwater, proving that the "bells" were formed in ancient times.
According to Prof. Stinnesbeck, this underwater world provides an ecosystem with ideal conditions for the formation of the largest underwater calcareous stalcatites in the world, unique not only in shape and size, but also in their method of growth. Mineral deposits of this type discovered earlier are much smaller and less visible than these Hells Bells.
The Cenotes of Yucatan are one of the most fascinating natural wonders of Mexico and the world, combining religion and the mystery of the Mayan civilization. They are considered "sacred wells" as they are sources of drinking water with spiritual and healing properties. Within these caves, in the depths of the ocean lie hidden worlds yet to be discovered, submerged places of great beauty that. In the case of Hell’s Bells, just another prodigy of mother nature.
| yes |
Spelaeology | Can stalactites form underwater? | yes_statement | "stalactites" can "form" "underwater".. "underwater" conditions can lead to the formation of "stalactites". | https://www.indiatimes.com/technology/science-and-future/brinicle-ice-antarctica-sea-creatures-death-545049.html | Rare Icy Pillar Of Death Forms Under Antarctic Waters, Trapping ... | Nature surely works in mysterious ways and today, while browsing the web I came across an article by Earthly Mission about a phenomenon called 'Brinicles', where ice forms like an icy hand of death that reaches down to the seabed, killing all sea creatures along the way.
What are brinicles?
Brinicles, also known as ice stalactites, are ice structures that grow downward from the underside of sea ice toward the ocean floor. They often look like an underwater tornado and commonly form in the frigid Antarctic waters; they are specifically found in the polar regions of our planet. They are also known as ice fingers of death because they can trap sea creatures in their path once they reach the seafloor.
How are they formed?
Ice on the ocean surface forms from two components -- water and salt. As the water freezes, it rejects most of the salt, leaving behind nearly pure ice crystals. The rejected salt concentrates in the remaining liquid, producing a highly saline brine.
Because salt water needs even colder temperatures to freeze, this brine stays liquid, collecting in saline brine channels within the porous ice.
When the floating sea ice cracks and leaks this brine into the ocean, the dense solution starts to sink. The brine is also colder than the freezing point of the surrounding seawater, so it freezes the water it passes through, building a hollow tube of ice that descends toward the seafloor -- the ominous finger-of-death structure.
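The sinking step can be illustrated with a very crude linear equation of state. The coefficients and the brine values below are assumed round numbers, not measurements — just enough to show why the brine is heavier than the water around it:

```python
# Crude linear equation of state for seawater near 0 degC (coefficients are
# assumed round numbers, good enough to show the sign of the effect).
def density_kg_m3(salinity_g_kg: float, temp_c: float) -> float:
    rho_fresh = 999.8   # kg/m^3, fresh water near 0 degC
    beta = 0.80         # density increase per g/kg of salt (assumed)
    alpha = 0.05        # density increase per degC of cooling near 0 degC (assumed)
    return rho_fresh + beta * salinity_g_kg + alpha * (0.0 - temp_c)

seawater = density_kg_m3(34.0, -1.8)    # typical polar surface water
brine = density_kg_m3(100.0, -20.0)     # cold, concentrated brine (assumed values)
print(f"seawater: {seawater:.1f} kg/m^3, brine: {brine:.1f} kg/m^3")
# The brine comes out denser by tens of kg/m^3, so it sinks and seeds the brinicle.
```

Real oceanographic work uses a full nonlinear equation of state, but the sign of the effect — saltier and colder means denser — is all the brinicle needs.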
Black pools of death
What’s fascinating about brinicles is that they’re actually quite delicate structurally; even the slightest touch can shatter them. Yet as this delicate ice finger reaches the floor, it can easily trap sea creatures in the spreading ice, which experts often refer to as black pools of death.
Jump To
What are brinicles?
Brinicles, also known as ice stalactites are ice structures that are formed from the top of the ocean to the bottom. They often appear as an underwater tornado and are commonly formed in the frigid Antarctic waters, they are specifically found in the polar regions of our planet. They’re also known as ice fingers of death as they’re known to trap sea creatures in its way, as soon as it touches the seafloor.
How are they formed?
Ice on the ocean surface consists of two components -- water and salt. When the water is in the freezing process, it releases most of the salt, leaving behind the ice crystal in its purest form. However, along with this, it also causes an increase in the presence of excess salt.
BBC
However, since it needs even colder temperatures to freeze the salt, the saltwater stays in liquid form, forming a saline brine channel within the porous ice.
However, when this floating sea ice cracks and leaks out the saline water solution into the ocean, it starts to sink due to its heavy nature and while it's touching the ground, it also ends up freezing due to the extreme cold, thus forming the ominous finger of death-like structure.
Black pools of death
What’s fascinating about brinicles is that they’re actually quite delicate, structurally and even the slightest touch can shatter them. However, this delicate ice finger while reaching the floor can easily trap sea creatures in it, which experts often refer to as black pools of death.
Accept the updated Privacy & Cookie Policy
The indiatimes.com privacy policy has been updated to align with the new data regulations in European Union. Please review and accept these changes below to continue using the website. We use cookies to ensure the best experience for you on our website. | yes |
Spelaeology | Can stalactites form underwater? | no_statement | "stalactites" cannot "form" "underwater".. the formation of "stalactites" does not occur "underwater". | https://www.nps.gov/jeca/learn/nature/geology-of-jewel-cave.htm | Geology of Jewel Cave - Jewel Cave National Monument (U.S. ... |
Geology of Jewel Cave
Flowstone, stalactites, and other cave formations can be seen throughout Jewel Cave.
NPS/ B. Block
Unlike many other caves, Jewel Cave was not carved by underground rivers. Most of the cave was formed by slowly circulating, acid-rich groundwater. Its unique story begins with the geologic history of the Black Hills.
The oldest rocks in South Dakota’s Black Hills are Precambrian-era igneous and metamorphic rocks, which formed under heat and pressure nearly 2 billion years ago.
During the Mississippian time period, between 345 and 360 million years ago, a shallow sea covered the area. The sea advanced and receded several times. Sediment and calcium carbonate shells accumulated at the bottom of the sea, and over time, were compressed to form the Pahasapa Limestone (regionally known as the Madison Formation). The shells that formed the limestone came from ancient marine animals such as brachiopods. Fossils from Mississippian time are visible in the cave today.
As the limestone was forming, bodies of gypsum (calcium sulfate) crystallized from the seawater during periods of high evaporation. The gypsum formed irregular masses within the limestone.
Shortly after the limestone was deposited, thin gypsum beds in the upper part of the Pahasapa were dissolved away and the overlying limestone collapsed into the resulting voids. This marked the first stage of cave development at Jewel Cave.
The sea advanced and receded across the area several times. As the sea receded, the limestone was exposed to the open air. It was also exposed to fresh water from rainfall, which began to dissolve the limestone, creating sinkholes and caves. This was the second phase of cave development at Jewel Cave.
Around 320 million years ago, during the Pennsylvanian period, the Minnelusa Formation was deposited as freshwater streams carried sediments into the sea. The Minnelusa consists primarily of sandstone, with a few thin beds of limestone and dolomite. The Minnelusa covered the Pahasapa Limestone and filled the Mississippian sinkholes, cave entrances, and many passages. This reddish "paleofill" is visible in the upper passages of present-day Jewel Cave.
Approximately 60 million years ago, long after the sea receded for the last time, the Black Hills began to form. At the center of this new mountain range, the Precambrian rocks were thrust upward several thousand feet. The younger sedimentary rocks (the Minnelusa and Pahasapa) were eroded from the highest areas over the next 30 million years, exposing the Precambrian rocks at the surface. The remaining sedimentary rocks now surround the central Black Hills and tilt away from the center of the uplift. Jewel Cave is located in the southwestern Black Hills, where the sedimentary rocks tilt (or "dip") at an angle of approximately 4 degrees from the northeast to the southwest.
Nearly 40 million years ago, the climate changed and rainfall increased. Much of this freshwater made its way slowly underground. It first passed through the overlying soil, which was rich in carbon dioxide from decaying plants. The carbon dioxide transformed the water into carbonic acid. This weak acid traveled through fractures in the rock until it reached the water table, which rose and filled cracks in the limestone. This standing or slow-moving acid-rich water formed the majority of Jewel Cave. The water slowly drained from the cave as surface erosion created exits for the water in the form of springs.
Crystal Growth
The blunt nailhead spar crystals that line most of the cave’s walls are not forming today. They formed when the cave was still completely or partially filled with water. As acidic water dissolved the limestone and created the cave, it became saturated with calcite. Some of this calcite was re-deposited underwater on the walls of the cave, in the form of spar.
Pockets of dogtooth spar, which are sharp-ended crystals, formed when the limestone was still deeply buried under younger rocks. They once lined the openings of early caves that were not completely filled with sediment from deposition of the Minnelusa Formation.
Speleothem Formation
Once the water that filled the cave drained away, cave formations (or speleothems) began to form. Many of these are still forming today.
Calcite speleothems form as surface water makes its way through carbon dioxide-rich soil and travels underground through the limestone. The resulting carbonic acid picks up calcite (CaCO3) as it dissolves the limestone. Once it enters an air-filled cave passage, the acid loses its carbon dioxide to the cave air and becomes water again. Non-acidic water cannot hold calcite in solution, so it deposits the calcite in the form of stalactites, stalagmites, flowstone, draperies, or popcorn. The type of formation created depends largely on whether the water is dripping, trickling, or seeping when it enters the cave passage.
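As a toy illustration of how slowly this drip-by-drip deposition adds up, here is a simple growth budget. Every input (drip rate, drip size, calcite left behind per litre) is an assumed illustrative value, not NPS data for Jewel Cave:

```python
# Toy stalactite growth budget: drips deliver dissolved calcite, and a
# small amount precipitates as each drip degasses CO2
# (CaCO3 + H2O + CO2 <-> Ca(HCO3)2, driven right underground, left in the cave).
# All inputs are assumed illustrative values, not measurements from Jewel Cave.
DRIPS_PER_MINUTE = 10.0          # assumed
DRIP_VOLUME_ML = 0.05            # assumed
CALCITE_LEFT_MG_PER_L = 100.0    # assumed mg of CaCO3 deposited per litre
RHO_CALCITE = 2.71               # g/cm^3

litres_per_year = DRIPS_PER_MINUTE * DRIP_VOLUME_ML / 1000.0 * 60 * 24 * 365
grams_per_year = litres_per_year * CALCITE_LEFT_MG_PER_L / 1000.0
cm3_per_year = grams_per_year / RHO_CALCITE
print(f"{grams_per_year:.0f} g/yr, about {cm3_per_year:.0f} cm^3 of new calcite")
```

Even with a fairly generous drip rate, the result is only a few cubic centimetres of new stone per year — which is why large speleothems record thousands of years of growth.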
Gypsum speleothems form because water seeping into the cave often contains small amounts of gypsum (calcium sulfate, CaSO4) picked up from the limestone or overlying sandstone. When this water evaporates in the cave, it deposits gypsum in the form of needles, beards, flowers, or spiders. Gypsum formations are found only in dry parts of the cave.
Hydromagnesite speleothems are often the by-product of frostwork or popcorn formation. When calcite and aragonite crystallize out of water seeping from the cave walls, magnesium becomes more concentrated than calcium in the remaining water. In areas of very high evaporation, the magnesium will precipitate out as hydromagnesite. Hydromagnesite often appears on the walls as small white clumps resembling chalky cottage cheese. Rare hydromagnesite balloons exist in a few areas of the cave, where the pasty material has been inflated.
For More Information
Visit the Media Presentations page to see informative podcasts showing how the cave was formed and some of the underground beauty of Jewel Cave.
Last updated: August 13, 2021
Crystal Growth
The blunt nailhead spar crystals that line most of the cave’s walls are not forming today. They formed when the cave was still completely or partially filled with water. As acidic water dissolved the limestone and created the cave, it became saturated with calcite. Some of this calcite was re-deposited underwater on the walls of the cave, in the form of spar.
Pockets of dogtooth spar, which are sharp-ended crystals, formed when the limestone was still deeply buried under younger rocks. They once lined the openings of early caves that were not completely filled with sediment from deposition of the Minnelusa Formation.
Speleothem Formation
Once the water that filled the cave drained away, cave formations (or speleothems) began to form. Many of these are still forming today.
Calcite speleothems form as surface water makes its way through carbon dioxide-rich soil and travels underground through the limestone. The resulting carbonic acid picks up calcite (CaCO3) as it dissolves the limestone. Once it enters an air-filled cave passage, the acid loses its carbon dioxide to the cave air and becomes water again. Non-acidic water cannot hold calcite in solution, so it deposits the calcite in the form of stalactites, stalagmites, flowstone, draperies, or popcorn. The type of formation created depends largely on whether the water is dripping, trickling, or seeping when it enters the cave passage.
Gypsum speleothems form because water seeping into the cave often contains small amounts of gypsum (calcium sulfate, | no |
Spelaeology | Can stalactites form underwater? | no_statement | "stalactites" cannot "form" "underwater".. the formation of "stalactites" does not occur "underwater". | https://www.bbc.co.uk/blogs/23degrees/climate_change | Orbit: Earth's Extraordinary Journey - BBC | The second instalment of the series follows the Earth's journey from the start of January to the Spring Equinox in March. Available on iplayer. What did you think?
Kate begins the film on a day that marks a very significant point in our Earth's journey - perihelion. She climbs Aonach Mor, one of the highest mountains in Scotland, on the day that brings her as close to the Sun as she'll ever be for the entire year.
This however is not because of where she is but because of the point the Earth has reached in its orbit around the Sun. In fact we kick started our blog on this day just over a year ago, when we explored the elliptical shape of our planet's orbit and how significant this was to our understanding of Earth's climate.
Later in the film Helen explains how the proximity of the Earth to the Sun doesn't guarantee warmth - which brings us to the tilt of the Earth (23.4 degrees) - a theme we explore in further detail in episode three.
Throughout this episode Kate and Helen explore the increase in solar radiation and how land and ocean respond to it.
Kate drives over a frozen lake in Canada with an ice road trucker in one of the coldest places in that region and learns how important this ice formation is to connecting communities.
In this film we also tackle ice ages and how, over time, as Earth has repeated its annual journey, its climate has changed.
Helen dives underwater in Belize to discover how sea levels have risen and fallen over time due to ice ages - and explores the three cycles that need to be right in order for another ice age to occur.
Sharks and stalactites may be close to each other in the dictionary, but you would think that reality keeps them a safe distance apart. For a start, sharks aren't known for inhabiting caves, and every stalactite I've ever seen has been in a cave. Secondly, stalactites can't grow underwater and sharks can't breathe if they're taken out of water. That sounds like a clinching argument if ever I heard one, but the thing I love about science is that our world is more complicated and interesting than that. Not only did I see lots of sharks swim past lots of stalactites this week, but this weird combination tells us something fundamental about our planet. And it's not that a flock of flying sharks has started spelunking because they suddenly fancied bats for dinner.
Belize is just next to Guatemala and south of Mexico, tucked into the back of the Caribbean sea. Its coastline is littered with islands and coral reefs, but what brought Jacques Cousteau here in 1970 is a circular deep blue hole in the reef. We arrived in Belize last Monday laden with SCUBA gear, all ready to explore that hole.
Going into the hole was pretty eerie. There is sand and coral right up to the edge, and then the vertical wall just drops away into the darkness. We left all the brightness and light and colourful fish behind, and sank slowly. After going down a little way, all I could see was the rock wall stretching into the gloom. I found looking away from the wall a bit disconcerting because it felt as though anything could swim out of the black, even though I knew perfectly well how unlikely that was. We kept going down further and further, and I stared at the wall, straining to see what on earth brings people here. A reef shark swam past just two metres underneath me. And then the gloom readjusted itself just in front of me and I was looking at a stalactite that was nearly a metre wide at the top where I was, and was probably 5 metres long, pointing downwards into the depths. It was monstrous. There was an overhang, like an upside-down shelf a few metres deep, and looking along it I could see other stalactites hanging down, all of a similar size. We swam along the overhang, and the sharks cruised past us a few metres further out from the wall.
Dives that deep have to be short, and we had work to do, so it was only that night that the scale and the incongruity of what I'd seen sank in.
The size of the stalactites helps you understand the size of the story they're telling. Both are gigantic, almost too big to fit into a human brain. The reason that the stalactites are down there at all is that during ice ages, sea level gets much much lower. 15,000 years ago, the last time those stalactites were growing, they were on a cliff in dry air because sea level was 120 metres lower than it is today. That's the sort of fact that you can read and understand logically, and it's something that I had known for years, but it's hard to digest properly. Read it again: 120 metres lower. That is an awful lot of ocean that wasn't there. Floating in the darkness with 40 metres of water above me, next to a rock wall that kept going downwards as far as I could see, I came closer than I ever have to really understanding the enormity of the changes that ice ages bring to Earth. Oh yeah, and there were sharks too.
One of the things that amazes me about our planet is how it carries clues to its own past. It's a bit like a giant memory stick; the trick is to find the right file. And today Helen Czerski is on the trail of one of these files, but it's not buried underground - it's buried deep under water.
Helen is pushing the limits of her endurance, diving 40 metres below the waves, searching for evidence of what our world looked like 20,000 years ago. It might seem odd to be going deep under the water to unearth our climate past but you'd be surprised what you find down there.
Helen's diving the Great Blue Hole 60 miles off the coast of Belize. It's, as the name suggests, a great big round hole that goes down over 120 metres. It was once a cave but the roof collapsed, leaving the deep blue hole. It's more than just a wonderful piece of natural architecture. It's also a window into our past. Because deep down in the hole are clues to one of the most dramatic events in our planet's history.
It's a very tricky and technical dive as it's so deep, so Helen is accompanied by a very experienced dive team. Fortunately she's an experienced and highly qualified diver herself, so she's the perfect person for the job. Though she did have to learn how to use a special facemask designed for presenters to talk underwater. Even with all the experience on show it's still a daunting dive, but she's following in some famous footsteps. Jacques Cousteau explored the Blue Hole back in 1970.
As she descends Helen must carefully monitor her buoyancy, at these depths she doesn't want to go up or down too quickly, that's not good news, plus the sheer walls of the hole will make the dive feel very enclosed.
At around 40 metres she will reach what she's looking for. Here the walls of the hole are cut away and there are some incredible rock formations several metres high. These formations are stalactites, which is an odd thing to find because, if I remember my geology, stalactites can't form underwater, so you shouldn't find them in the ocean. Stalactites are created when mineral-rich water drips from the roof of a cave over hundreds or even thousands of years, leaving behind mineral deposits. Over time these build up to create the beautiful structures. But they can only form on land, so what are they doing 40 metres down the Blue Hole?
Here's what must have happened. At some point back in time, this cave must have been above sea level, which means that when these stalactites formed, the ocean must have been much lower than it is today. These stalactites not only show us that sea levels have changed they also can show us when.
When Cousteau explored the hole they brought up a broken stalactite and when they cut a cross-section they found a series of rings, a bit like tree rings. Each of the rings represents a period of growth when the stalactite was exposed to air. That growth would stop when it became submerged again. Cousteau's stalactite shows three growth stages, so it's a record of changing sea levels over time.
Scientists have precisely dated stalactites from the Blue Hole and, by comparing stalactites from around the world with other data like Antarctic ice cores, they've built up a picture of changing sea levels dating back hundreds of thousands of years. What it reveals is that sea levels here in the Caribbean, and across the world, have dramatically risen and fallen over time.
Just 20,000 years ago, the sea was an incredible 120 metres lower than it is today. That means almost the entire Blue Hole cave system would have been on dry land. But the world has a finite amount of water in it at any given time. So if that huge mass of water wasn't in the oceans just where was it? Well believe it or not it was on land, but not as water but as ice.
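The scale of that number is worth checking. A rough conversion from a 120 m sea-level drop to an equivalent volume of land ice (using a rounded modern ocean area and ignoring coastline changes and isostatic effects):

```python
# How much land ice does a 120 m sea-level drop imply? Rounded figures,
# ignoring coastline changes and isostatic effects - a sanity check only.
OCEAN_AREA_M2 = 3.6e14     # roughly the modern ocean surface area
DROP_M = 120.0
RHO_WATER, RHO_ICE = 1000.0, 917.0

water_m3 = OCEAN_AREA_M2 * DROP_M
ice_m3 = water_m3 * RHO_WATER / RHO_ICE      # same mass, but ice is less dense
print(f"~{water_m3 / 1e9 / 1e6:.0f} million km^3 of water "
      f"locked up as ~{ice_m3 / 1e9 / 1e6:.0f} million km^3 of ice")
```

That's tens of millions of cubic kilometres of ice stacked on the continents - continent-sized ice sheets, which is exactly what the geological record of the last glacial maximum shows.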
20,000 years ago the earth was gripped by an ice age.
How did this happen, well you'll find out when our series airs...in 2012 mind you :-)
Noctilucent clouds are a summertime phenomenon first observed in around 1885. Noctilucent clouds [the name means "night shining" in Latin] are high, wispy clouds made of tiny crystals of water ice up to 100 nanometers in diameter. They are the highest clouds in the Earth's atmosphere, occurring in the mesosphere at altitudes of around 76 to 85 kilometers (47 to 53 mi).
Image courtesy of Brendan Alexander/Flickr, Ireland, June 15 2011
Clouds in the Earth's lower atmosphere form in a process called nucleation, when water gathers on dust particles, but noctilucent clouds also form directly from water vapour as well as around dust particles. It is unclear where the dust or water in the mesosphere comes from, but it's thought that the particles may be dust from micrometeors, although some scientists think dust from volcanoes may also be involved. The source of the water is equally unclear, as the mesosphere contains very little moisture - approximately one hundred millionth that of air from the Sahara desert - but it's possible that the water comes from lower in the atmosphere or from chemical reactions in the upper atmosphere. This water vapour freezes directly into ice crystals to form the clouds in the thin upper atmosphere when temperatures drop to about -120 °C (-184 °F).
For many years noctilucent clouds were a very rare sighting, but over the past 20 years they have become more common. Originally confined to the higher latitudes, they are increasingly observed in lower latitudes nearer the equator. So why are they becoming more common and reaching lower latitudes?
NASA's AIM satellite mission (Aeronomy of Ice in the Mesosphere), which launched in 2007, was set up to study noctilucent clouds and to answer these questions.
Dr James Russell of Hampton University explains to me some of the findings of the AIM mission (James Russell III of Hampton University, Hampton, Va., is AIM's principal investigator):
"Noctilucent clouds are the highest clouds in the Earth's atmosphere, forming in the mesosphere at high altitudes (approximately 76 to 85 kilometers, or 47 to 53 miles). It seems odd that they are a summertime phenomenon when they feed off extremely cold temperatures. However, as heat warms the air near the ground, the air rises. As it rises, it also expands, since atmospheric pressure decreases with height, and the expanding air cools, driving temperatures in the mesosphere down past a freezing -210 ºF (-134 ºC).
We are still unsure exactly why they are increasing in lower latitudes or showing up brighter. They are like a geophysical light bulb: you go from no clouds to fully formed clouds in days. This may be due to a sudden change in temperature at the altitude at which these clouds form. They form in an atmosphere with 100 times lower pressure than at the Earth's surface.
During the summer season the temperature stays very low at the poles. For a long time we thought the increase in frequency was a result of temperature decrease, but now our research is leaning more towards water vapour. An increase in water vapour increases the frequency of clouds. The primary reason for more water vapour at higher altitudes is methane, for which we are most likely responsible.
Our research still has far to go, however. We have been at solar minimum whilst the AIM mission has been out. Heating is different and dynamics are different, so we need to continue our research for a full solar cycle."
The mission has now been extended until 2014, and Dr Russell thinks that the additional research may show a link between the frequency of Noctilucent clouds and human activity, and that this data may prove helpful to climate scientists investigating climate change.
(The wonderful views of the progressing NLC display were interrupted by no fewer than 3 majestic passes of the ISS. Finally, as the Sun was creeping towards the horizon, the brilliant Jupiter came into view in the north-eastern twilight. Upon returning home, when reviewing my photos, I realised I had captured an Iridium flare along with the NLC. A great ending to a truly magical night. June 15 2011)
About this blog
Orbit: Earth's Extraordinary Journey explores the relationship between the Earth's orbit and the weather. Previously '23 Degrees' (working title); on this blog the weather community were invited to discuss their experiences of severe weather as and when events developed and share their eyewitness footage throughout 2011. The audience were provided with an insight into the making of the series and exclusive behind-the-scenes footage. Follow us on Twitter.
Orbit: Earth's Extraordinary Journey - BBC
Sharks and stalactites may be close to each other in the dictionary, but you would think that reality keeps them a safe distance apart. For a start, sharks aren't known for inhabiting caves, and every stalactite I've ever seen has been in a cave. Secondly, stalactites can't grow underwater and sharks can't breathe if they're taken out of water. That sounds like a clinching argument if ever I heard one, but the thing I love about science is that our world is more complicated and interesting than that. Not only did I see lots of sharks swim past lots of stalactites this week, but this weird combination tells us something fundamental about our planet. And it's not that a flock of flying sharks has started spelunking because they suddenly fancied bats for dinner.
Belize is just next to Guatemala and south of Mexico, tucked into the back of the Caribbean Sea. Its coastline is littered with islands and coral reefs, but what brought Jacques Cousteau here in 1970 was a circular, deep blue hole in the reef. We arrived in Belize last Monday laden with SCUBA gear, all ready to explore that hole.
Going into the hole was pretty eerie. There is sand and coral right up to the edge, and then the vertical wall just drops away into the darkness. We left all the brightness and light and colourful fish behind, and sank slowly. After going down a little way, all I could see was the rock wall stretching into the gloom. I found looking away from the wall a bit disconcerting because it felt as though anything could swim out of the black, even though I knew perfectly well how unlikely that was. We kept going down further and further, and I stared at the wall, straining to see what on earth brings people here. A reef shark swam past just two metres underneath me. And then the gloom readjusted itself just in front of me and I was looking at a stalactite that was nearly a metre wide at the top where I was, and was probably 5 metres long, pointing downwards into the depths. It was monstrous. There was an overhang, like an upside-down shelf a few metres deep, and looking along it I could see other stalactites hanging down, all of a similar size. We swam along the overhang, and the sharks cruised past us a few metres further out from the wall.
Dives that deep have to be short, and we had work to do, so it was only that night that the scale and the incongruity of what I'd seen sank in.
The size of the stalactites helps you understand the size of the story they're telling. Both are gigantic, almost too big to fit into a human brain. The reason that the stalactites are down there at all is that during ice ages, sea level gets much much lower. 15,000 years ago, the last time those stalactites were growing, they were on a cliff in dry air because sea level was 120 metres lower than it is today. That's the sort of fact that you can read and understand logically, and it's something that I had known for years, but it's hard to digest properly. Read it again: 120 metres lower. That is an awful lot of ocean that wasn't there. Floating in the darkness with 40 metres of water above me, next to a rock wall that kept going downwards as far as I could see, I came closer than I ever have to really understanding the enormity of the changes that ice ages bring to Earth. Oh yeah, and there were sharks too.
One of the things that amazes me about our planet is how it carries clues to its own past. It's a bit like a giant memory stick; the trick is to find the right file. And today Helen Czerski is on the trail of one of these files, but it's not buried underground, it's buried deep under water.
Helen is pushing the limits of her endurance, diving 40 metres below the waves, searching for evidence of what our world looked like 20,000 years ago. It might seem odd to be going deep under the water to unearth our climate past, but you'd be surprised what you find down there.
Helen's diving the Great Blue Hole, 60 miles off the coast of Belize. As the name suggests, it's a great big round hole that goes down over 120 metres. It was once a cave, but the roof collapsed, leaving the deep blue hole. It's more than just a wonderful piece of natural architecture. It's also a window into our past, because deep down in the hole are clues to one of the most dramatic events in our planet's history.
It's a very tricky and technical dive as it's so deep, so Helen is accompanied by a very experienced dive team. Fortunately she's an experienced and highly qualified diver herself, so she's the perfect person for the job. Though she did have to learn how to use a special facemask designed to let presenters talk underwater. Even with all the experience on show it's still a daunting dive, but she's following in some famous footsteps: Jacques Cousteau explored the Blue Hole back in 1970.
As she descends, Helen must carefully monitor her buoyancy; at these depths, going up or down too quickly is not good news. Plus the sheer walls of the hole will make the dive feel very enclosed.
At around 40 metres she will reach what she's looking for. Here the walls of the hole are cut away and there are some incredible rock formations several metres high. These formations are stalactites, which is kind of an odd thing to find because, if I remember my geology, you shouldn't find stalactites in the ocean: they can't form underwater. Stalactites are created when mineral-rich water drips from the roof of a cave over hundreds or even thousands of years, leaving behind mineral deposits. Over time these build up to create the beautiful structures. But they can only form on land, so what are they doing 40 metres down the blue hole?
Here's what must have happened. At some point back in time, this cave must have been above sea level, which means that when these stalactites formed, the ocean must have been much lower than it is today. These stalactites not only show us that sea levels have changed; they can also show us when.
When Cousteau explored the hole they brought up a broken stalactite and when they cut a cross-section they found a series of rings, a bit like tree rings. Each of the rings represents a period of growth when the stalactite was exposed to air. That growth would stop when it became submerged again. Cousteau's stalactite shows three growth stages, so it's a record of changing sea levels over time.
Scientists have precisely dated stalactites from the Blue Hole and, by comparing stalactites from around the world with other data like Antarctic ice cores, they've built up a picture of changing sea levels dating back hundreds of thousands of years. What it reveals is that sea levels, here in the Caribbean and across the world, have dramatically risen and fallen over time.
Just 20,000 years ago, the sea was an incredible 120 metres lower than it is today. That means almost the entire Blue Hole cave system would have been on dry land. But the world has a finite amount of water in it at any given time. So if that huge mass of water wasn't in the oceans, just where was it? Well, believe it or not, it was on land; not as water, but as ice.
20,000 years ago the earth was gripped by an ice age.
How did this happen? Well, you'll find out when our series airs... in 2012, mind you :-)
On our journey around the Sun for 23 Degrees we are focussing on three main themes that control our climate and weather: Tilt, Orbit and Spin. And at the moment we are filming the show about Spin.
The team are on the road in Ecuador. They have gone to the Ecuadorian rainforest to learn how solar energy powers a circulation system in the atmosphere that dictates the climate in bands around the world. Kate is also going to drive along the equator at around 1060 miles an hour. Well, not quite: her car's going 60, but the planet at the equator is spinning at 1000 mph, so for a few moments she's probably the fastest person on the planet.
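For a rough check of the "fastest person on the planet" sums, here's a back-of-the-envelope sketch (ours, not the programme's; the radius and day-length values below are standard reference figures, not from the post):

```python
import math

# Assumed reference values (not from the blog post):
EQUATORIAL_RADIUS_KM = 6378.1   # Earth's equatorial radius
SIDEREAL_DAY_HOURS = 23.9345    # one full rotation relative to the stars
KM_PER_MILE = 1.609344

circumference_km = 2 * math.pi * EQUATORIAL_RADIUS_KM
spin_mph = circumference_km / SIDEREAL_DAY_HOURS / KM_PER_MILE
total_mph = spin_mph + 60       # add the car's 60 mph, driving eastwards

print(f"spin at the equator: {spin_mph:.0f} mph")   # ~1040 mph
print(f"car + spin:          {total_mph:.0f} mph")  # ~1100 mph
```

So the round 1000 mph in the post is, if anything, an underestimate: with standard values the equator moves at about 1040 mph, putting Kate's total nearer 1100 mph than 1060.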
The next stop after the heat of South America is the Bay of Fundy in Canada. Fundy has the highest tidal range in the world and Kate and the team are going to witness it first hand.
From there it's down to Bermuda to go snorkelling and learn about our planet's spin. Nice work if you can get it.
One of the things I find fascinating about our planet is that it carries a record of its own history written in its rocks. Some of this history is obvious - you can't miss the impact of giant craters blasted out by asteroid strikes. But some are less obvious - unless you know where to look. Kate's studying corals as these tiny creatures hold the secret to our distant past - and how fast our planet once spun.
Every day corals lay down a growth ring of limestone, and these daily growth rings build up to create an annual growth ring [a bit like a tree ring]. If you count the daily rings you get 365 a year, which is what you'd expect.
Image credit Owen Sherwood
But if you look at 400-million-year-old corals you get a very different picture. They have rings just like the modern coral, but they are a little bit narrower. What's really surprising is that if you count the daily growth rings you don't find 365, you find 410. That means that when this coral was alive 400 million years ago the world was a very different place.
But however you measure it - in hours or days - the Earth's orbit around the Sun always takes the same amount of time. A year is always constant. The only explanation for the ancient corals having 410 daily growth rings is that millions of years ago the days were shorter. So when this coral was in the oceans there were fewer hours in each day - in fact, a day lasted just 21 hours. And for that to happen the Earth must have been spinning faster.
If we calculate back even further in time we find that around 4 billion years ago, when the Earth was still young, a day lasted just 6 hours, which means the planet was spinning 4 times faster than today.
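The arithmetic behind those figures is simple enough to sketch (this snippet is ours, not from the programme; only the ring counts and the 6-hour figure come from the post). Since the length of the year is effectively constant, the number of daily rings per annual ring fixes the day length:

```python
HOURS_PER_YEAR = 365.25 * 24   # the year's length is (near enough) constant

def hours_per_day(daily_rings_per_year):
    """Day length implied by counting this many daily growth rings per year."""
    return HOURS_PER_YEAR / daily_rings_per_year

print(f"{hours_per_day(365):.1f}")   # 24.0 h: modern coral
print(f"{hours_per_day(410):.1f}")   # 21.4 h: 400-million-year-old coral
print(f"{HOURS_PER_YEAR / 6:.0f}")   # 1461 'days' per year for a 6-hour day
```

A 6-hour day implies roughly 1461 daily rings per year, about four times today's 365 - matching the claim that the young Earth spun 4 times faster.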
Hurricane hunting was not supposed to be like this. The Sun was shining, there were butterflies everywhere, and there wasn't enough wind to blow out a candle on a birthday cake. On the plus side, we had to give ourselves top marks for trying and I didn't have to get drenched again.
Twenty four hours earlier, things had looked very different. We had been tracking several Atlantic storms, and finally Tropical Storm Nate was forecast to make landfall in the Gulf of Mexico as a category 2 hurricane. I've never paid that much attention to tropical storms in the past, but it turns out that storm-monitoring is surprisingly addictive. Tropical disturbances in the Atlantic often start out near the coast of Africa, and then they crawl across the ocean to the west, growing or petering out as they go. The storms move at about 15 mph, so they'd lose a race to any half-decent cyclist. That gives the nascent addict many happy days of monitoring storm strength and direction. There are also exciting milestones such as the day the storm is given its name, and most important of all, the day the maximum sustained winds first reach 74 mph and the storm is declared to have graduated to hurricane status. Most storms don't make it that far. If they drift too far north, they get broken up or run out of fuel, and they can be decapitated by high-level winds, never giving them a chance to grow.
Satellite image captured 09-sep-2011
Tropical Storm Nate was interesting because it had skipped the slog across the Atlantic ocean, and had instead formed entirely inside the Gulf of Mexico, stuck in the gap between the Yucatan and the rest of Mexico. It was pootling westwards at only 3 or 4 miles an hour, feeding off the nice warm bath it was trapped in, and forecast to hit the Mexican coastline near Veracruz as a category 2 hurricane. We thought that we finally had a winner, and off we went.
The five of us arrived in Veracruz in the dark, only 18 hours before the centre of the storm was due to hit the coastline. It was horribly hot and sticky, and the evening gloom made everything feel very ominous. The wind was picking up and we were excited and a bit nervous about what would happen in the morning.
What happened was that we learned that Tropical Storm Nate had apparently become "disorganized" overnight. I've got friends like that, but I wasn't expecting it from a giant atmospheric whirlpool. Josh Wurman (our hurricane expert) inspected the satellite images on his computer screen and made "meh" noises whenever the director asked him where exactly the storm had gone.
The tight spiral that we had seen the previous day had widened, split and was indeed looking pretty disorganized. It rained hard for a couple of hours that morning, so we did film some nasty weather, but soon the sun and the butterflies came out again. We stared at the flat calm ocean and wondered whether to blame the butterflies for flapping their wings.
In our absence, of course, the remnants of Hurricane Katia were passing over Scotland. The winds in Scotland this weekend reached twice the speeds we saw in Mexico. We are not bitter about this. Honest. We had all thought that filming a hurricane would be much easier than filming a tornado, just because hurricanes last for weeks and their tracks can now be predicted very accurately. But we learnt the hard way that the complications of our atmosphere are still not perfectly understood, and that even a large storm can vanish almost overnight if the conditions are right. But still, it's all part of experiencing the weather, and I'm actually quite glad that the town where we were was able to have a normal Monday morning, rather than dealing with the damage and flooding that a hurricane would have left behind.
Researchers study unique underwater stalactites (Phys.org)
The Hells Bells in the El Zapote cave near Puerto Morelos on the Yucatán Peninsula. Credit: E.A.N./IPA/INAH/MUDE/UNAM/HEIDELBERG
In recent years, researchers have identified a small group of stalactites that appear to have calcified underwater instead of in a dry cave. The "Hells Bells" in the El Zapote cave near Puerto Morelos on the Yucatán Peninsula are just such formations. A German-Mexican research team led by Prof. Dr Wolfgang Stinnesbeck from the Institute of Earth Sciences at Heidelberg University recently investigated how these bell-shaped, metre-long formations developed, assisted by bacteria and algae. The results of their research have been published in the journal Palaeogeography, Palaeoclimatology, Palaeoecology.
Hanging speleothems, also called stalactites, develop through physicochemical processes in which calcium carbonate-rich water dries up. Normally, they taper to a tip at the lower end, from which drops of water fall to the cave floor. The formations in the El Zapote cave, which are up to two metres long, expand conically downward and are hollow, with round, elliptical or horseshoe-shaped cross-sections. Not only are they unique in shape and size, but also in their mode of growth, according to Prof. Stinnesbeck. They grow in a lightless environment near the base of a 30 m freshwater unit, immediately above a zone of oxygen-depleted and sulfide-rich toxic saltwater. "The local diving community dubbed them Hells Bells, which we think is especially appropriate," says Wolfgang Stinnesbeck. Uranium-thorium dating of the calcium carbonate verifies that these formations actually grew underwater, and that the Hells Bells formed in ancient times; even then, the deep regions of the cave had already been submerged for thousands of years.
According to the Heidelberg geoscientist, this underwater world on the Yucatán Peninsula in Mexico represents an enigmatic ecosystem providing the conditions for the formation of the biggest underwater speleothems worldwide. Previously discovered speleothems of this type are much smaller and less conspicuous than the Hells Bells, adds Prof. Stinnesbeck. The researchers suspect that the growth of these hollow structures is tied to the specific physical and biochemical conditions near the halocline, the layer that separates the freshwater from the underlying saltwater. "Microbes involved in the nitrogen cycle, which are still active today, could have played a major role in calcite precipitation because of their ability to increase the pH," explains Dr Stinnesbeck.
Aging can be reversed in mice. Are people next? (CNN)
In Boston labs, old, blind mice have regained their eyesight, developed smarter, younger brains and built healthier muscle and kidney tissue. On the flip side, young mice have prematurely aged, with devastating results to nearly every tissue in their bodies.
The experiments show aging is a reversible process, capable of being driven "forwards and backwards at will," said anti-aging expert David Sinclair, a professor of genetics in the Blavatnik Institute at Harvard Medical School and codirector of the Paul F. Glenn Center for Biology of Aging Research.
Our bodies hold a backup copy of our youth that can be triggered to regenerate, said Sinclair, the senior author of a new paper showcasing the work of his lab and international scientists.
“We believe it’s a loss of information — a loss in the cell’s ability to read its original DNA so it forgets how to function — in much the same way an old computer may develop corrupted software. I call it the information theory of aging.”
Jae-Hyun Yang, a genetics research fellow in the Sinclair Lab who coauthored the paper, said he expects the findings “will transform the way we view the process of aging and the way we approach the treatment of diseases associated with aging.”
The epigenome literally turns genes on and off. That process can be triggered by pollution, environmental toxins and human behaviors such as smoking, eating an inflammatory diet or suffering a chronic lack of sleep. And just like a computer, the cellular process becomes corrupted as more DNA is broken or damaged, Sinclair said.
“The cell panics, and proteins that normally would control the genes get distracted by having to go and repair the DNA,” he explained. “Then they don’t all find their way back to where they started, so over time it’s like a Ping-Pong match, where the balls end up all over the floor.”
These mice are from the same litter. The one at right has been genetically altered to be old.
In other words, the cellular pieces lose their way home, much like a person with Alzheimer’s.
“The astonishing finding is that there’s a backup copy of the software in the body that you can reset,” Sinclair said. “We’re showing why that software gets corrupted and how we can reboot the system by tapping into a reset switch that restores the cell’s ability to read the genome correctly again, as if it was young.”
It doesn’t matter if the body is 50 or 75, healthy or wracked with disease, Sinclair said. Once that process has been triggered, “the body will then remember how to regenerate and will be young again, even if you’re already old and have an illness. Now, what that software is, we don’t know yet. At this point, we just know that we can flip the switch.”
Years of research
The hunt for the switch began when Sinclair was a graduate student, part of a team at the Massachusetts Institute of Technology that discovered the existence of genes to control aging in yeast. That gene exists in all creatures, so there should be a way to do the same in people, he surmised.
To test the theory, he began trying to fast-forward aging in mice without causing mutations or cancer.
“We started making that mouse when I was 39 years old. I’m now 53, and we’ve been studying that mouse ever since,” he said. “If the theory of information aging was wrong, then we would get either a dead mouse, a normal mouse, an aging mouse or a mouse that had cancer. We got aging.”
With the help of other scientists, Sinclair and his Harvard team have been able to age tissues in the brain, eyes, muscle, skin and kidneys of mice.
To do this, Sinclair’s team developed ICE, short for inducible changes to the epigenome. Instead of altering the coding sections of the mice’s DNA that can trigger mutations, ICE alters the way DNA is folded. The temporary, fast-healing cuts made by ICE mimic the daily damage from chemicals, sunlight and the like that contribute to aging.
ICE mice at one year looked and acted twice their age.
Becoming young again
Now it was time to reverse the process. Sinclair Lab geneticist Yuancheng Lu created a mixture of three of four “Yamanaka factors,” human adult skin cells that have been reprogrammed to behave like embryonic or pluripotent stem cells, capable of developing into any cell in the body.
The cocktail was injected into damaged retinal ganglion cells at the back of the eyes of blind mice and switched on by feedingmice antibiotics.
“The antibiotic is just a tool. It could be any chemical really, just a way to be sure the three genes are switched on,” Sinclair told CNN previously. “Normally they are only on in very young, developing embryos and then turn off as we age.”
The mice regained most of their eyesight.
Next, the team tackled brain, muscle and kidney cells, and restored those to much younger levels, according to the study.
“One of our breakthroughs was to realize that if you use this particular set of three pluripotent stem cells, the mice don’t go back to age zero, which would cause cancer or worse,” Sinclair said. “Instead, the cells go back to between 50% and 75% of the original age, and they stop and don’t get any younger, which is lucky. How the cells know to do that, we don’t yet understand.”
Today, Sinclair’s team is trying to find a way to deliver the genetic switch evenly to each cell, thus rejuvenating the entire mouse at once.
“Delivery is a technical hurdle, but other groups seem to have done well,” Sinclair said, pointing to two unpublished studies that appear to have overcome the problem.
“One uses the same system we developed to treat very old mice, the equivalent of an 80-year-old human. And they still got the mice to live longer, which is remarkable. So they’ve kind of beaten us to the punch in that experiment,” he said.
“But that says to me the rejuvenation is not just affecting a few organs, it’s able to rejuvenate the whole mouse because they’re living longer,” he added. “The results are a gift and confirmation of what our paper is saying.”
What’s next? Billions of dollars are being poured into anti-aging, funding all sorts of methods to turn back the clock.
In his lab, Sinclair said his team has reset the cells in mice multiple times, showing that aging can be reversed more than once, and he is currently testing the genetic reset in primates. But decades could pass before any anti-aging clinicaltrials in humans begin, get analyzed and, if safe and successful, scaled to the mass needed for federal approval.
But just as damaging factors can disrupt the epigenome, healthy behaviors can repair it, Sinclair said.
“We know this is probably true because people who have lived a healthy lifestyle have less biological age than those who have done the opposite,” he said.
His top tips? Focus on plants for food, eat less often,get sufficient sleep, lose your breath for 10 minutes three times a week by exercising to maintain your muscle mass, don’t sweat the small stuff and have a good social group.
“The message is every day counts,” Sinclair said. “How you live your life even when you’re in your teens and20s really matters, even decades later, because every day your clock is ticking.” | Sinclair Lab geneticist Yuancheng Lu created a mixture of three of four “Yamanaka factors,” human adult skin cells that have been reprogrammed to behave like embryonic or pluripotent stem cells, capable of developing into any cell in the body.
The cocktail was injected into damaged retinal ganglion cells at the back of the eyes of blind mice and switched on by feedingmice antibiotics.
“The antibiotic is just a tool. It could be any chemical really, just a way to be sure the three genes are switched on,” Sinclair told CNN previously. “Normally they are only on in very young, developing embryos and then turn off as we age.”
The mice regained most of their eyesight.
Next, the team tackled brain, muscle and kidney cells, and restored those to much younger levels, according to the study.
“One of our breakthroughs was to realize that if you use this particular set of three pluripotent stem cells, the mice don’t go back to age zero, which would cause cancer or worse,” Sinclair said. “Instead, the cells go back to between 50% and 75% of the original age, and they stop and don’t get any younger, which is lucky. How the cells know to do that, we don’t yet understand.”
Today, Sinclair’s team is trying to find a way to deliver the genetic switch evenly to each cell, thus rejuvenating the entire mouse at once.
“Delivery is a technical hurdle, but other groups seem to have done well,” Sinclair said, pointing to two unpublished studies that appear to have overcome the problem.
“One uses the same system we developed to treat very old mice, the equivalent of an 80-year-old human. And they still got the mice to live longer, which is remarkable. So they’ve kind of beaten us to the punch in that experiment,” he said.
“But that says to me the rejuvenation is not just affecting a few organs, it’s able to rejuvenate the whole mouse because they’re living longer,” he added. | yes |
Gerontology | Can stem cell therapy reverse aging? | yes_statement | "stem" "cell" "therapy" can "reverse" "aging".. "aging" can be "reversed" through "stem" "cell" "therapy". | https://www.aging-us.com/article/204896/text | Chemically induced reprogramming to reverse cellular aging | Aging | Abstract
A hallmark of eukaryotic aging is a loss of epigenetic information, a process that can be reversed. We have previously shown that the ectopic induction of the Yamanaka factors OCT4, SOX2, and KLF4 (OSK) in mammals can restore youthful DNA methylation patterns, transcript profiles, and tissue function, without erasing cellular identity, a process that requires active DNA demethylation. To screen for molecules that reverse cellular aging and rejuvenate human cells without altering the genome, we developed high-throughput cell-based assays that distinguish young from old and senescent cells, including transcription-based aging clocks and a real-time nucleocytoplasmic compartmentalization (NCC) assay. We identify six chemical cocktails, which, in less than a week and without compromising cellular identity, restore a youthful genome-wide transcript profile and reverse transcriptomic age. Thus, rejuvenation by age reversal can be achieved, not only by genetic, but also chemical means.
Introduction
All life depends on the storage and preservation of information. In eukaryotes, there are two main repositories of information: the genome and the epigenome. Though these information repositories work interdependently to coordinate the production and operation of life's molecular machinery, they are different in fundamental ways. Genetic information is digital and largely consistent across all cells in the body throughout an individual's lifespan. In contrast, epigenetic information is encoded by a less stable digital-analog system, varying between cells and changing in response to the environment and over time.
At least a dozen "hallmarks of aging" are known to contribute to the deterioration and dysfunction of cells as they age [1, 2]. We and other researchers have gathered compelling evidence, from yeast to mammals, supporting the idea that a loss of epigenetic information, resulting in changes in gene expression, leads to the loss of cellular identity [3–7]. These findings are consistent with the Information Theory of Aging, which proposes that a decline in information, specifically epigenetic information, triggers a cascade of events, including mitochondrial dysfunction, inflammation, and cellular senescence [5, 7–9], leading to a progressive decline in cell and tissue function, manifesting as aging and age-related diseases. We have previously shown in mice that cell injuries, such as DNA double-strand breaks and cell crushing, promote epigenetic information loss, which can lead to what appears to be an acceleration of aging and age-related disease [7, 9].
Cellular senescence is a state of permanent cell cycle arrest that facilitates wound repair, tissue remodeling, and avoidance of cancer by halting proliferation in aged and damaged cells [10, 11]. Senescence is associated with alterations in cell morphology, chromatin architecture, and the release of inflammatory factors in a process referred to as the senescence-associated secretory phenotype (SASP). The transition to cellular senescence can be initiated by a loss of epigenetic information, as well as telomere shortening, irreparable DNA damage, and cytoplasmic DNA [7, 10-12]. The accumulation of senescent cells with age promotes inflammation and generates additional reactive oxygen species (ROS), both locally and across the organism, contributing to a broad range of age-related diseases, from macular degeneration, to increased blood pressure, to metabolic dysregulation [13].
Starting in 1962, Gurdon and others demonstrated that nuclei contain the necessary information to generate new individuals with normal lifespans [14–16]. In 2006, Takahashi and Yamanaka demonstrated that the expression of four transcription factors, OCT4, SOX2, KLF4, and c-MYC (collectively known as "OSKM"), reprograms the developmental potential of adult cells, enabling them to be converted into various cell types [17, 18]. These findings initiated the field of cell reprogramming, with a string of publications in the 2000s showing that the identity of many different types of adult cells from different species could be erased to become induced pluripotent stem cells, commonly known as "iPSCs" [17, 19–21].
The ability of the Yamanaka factors to erase cellular identity raised a key question: is it possible to reverse cellular aging in vivo without causing uncontrolled cell growth and tumorigenesis? Initially, it didn't seem so, as mice died within two days of expressing OSKM. But work by the Belmonte lab, our lab, and others has confirmed that it is possible to safely improve the function of tissues in vivo by pulsing OSKM expression [22, 23] or by continuously expressing only OSK, leaving out the oncogene c-MYC [7, 8]. In the optic nerve, for example, expression of a combination of three Yamanaka factors safely resets DNA methylomes and gene expression patterns, improving vision in old and glaucomatous mice via a largely obscure mechanism that requires TET DNA demethylases [8]. Numerous tissues, including brain tissue, kidney, and muscle, have now been reprogrammed without causing cancer [7, 8, 22, 24, 25]. In fact, expression of OSK throughout the entire body of mice extends their lifespan [26]. Together, these results are consistent with the existence of a "back-up copy" of a youthful epigenome, one that can be reset via partial reprogramming to regain tissue function, without erasing cellular identity or causing tumorigenesis [7–9].
Currently, translational applications that aim to reverse aging, treat injuries, and cure age-related diseases rely on the delivery of genetic material to target tissues. This is achieved through methods like adeno-associated viral (AAV) delivery of DNA and lipid nanoparticle-mediated delivery of RNA [7, 8, 27]. These approaches face potential barriers to being used widely, including high costs and safety concerns associated with the introduction of genetic material into the body. Developing a chemical alternative to mimic OSK's rejuvenating effects could lower costs and shorten timelines in regenerative medicine development [26, 28–31]. This advancement might enable the treatment of various medical conditions and potentially even facilitate whole-body rejuvenation [32, 33].
In this study, we developed and utilized novel screening methods including a quantitative nucleocytoplasmic compartmentalization assay (NCC) that can readily distinguish between young, old, and senescent cells [34, 35]. We identify a variety of novel chemical cocktails capable of rejuvenating cells and reversing transcriptomic age to a similar extent as OSK overexpression. Thus, it is possible to reverse aspects of aging without erasing cell identity using chemical rather than genetic means.
Results
Nucleocytoplasmic compartmentalization (NCC) is disrupted in fibroblasts from old individuals and senescent cells
To identify small molecules that ostensibly reverse the effects of aging and senescence, we developed an efficient high-throughput system. Rather than relying on a limited set of genes that exhibit age-related changes and to ensure reliability and applicability across various cell types, we sought to develop an age-dependent assay that acted as a surrogate for cellular health and youthful gene expression patterns. To increase scalability and ease of use, we sought a fluorescence-based system that could be quantified in millions of cells per experiment via automated microscopy.
One of the most well-conserved physiological hallmarks of aging is deterioration of nucleocytoplasmic compartmentalization (NCC), which can be visualized as the leakage of nuclear proteins into the cytoplasm and failure of proteins to be imported into the nucleus [34, 35]. In neurons and astrocytes directly converted from fibroblasts of old humans, as well as in old nematodes and rat brain tissue, the nuclear pore complex deteriorates, leading to increased nuclear permeability and cytosolic protein aggregation [34–36].
To monitor age-associated alterations in nuclear permeability, we introduced the NCC reporter system into human fibroblasts from a 22-year-old donor (Figure 1A). mCherry and eGFP were linked to a nuclear localization signal (NLS) and a nuclear export signal (NES), respectively. In healthy young fibroblasts, the cellular localization of these two proteins was distinctly separated, whereas in fibroblasts from either a 94-year-old donor or a 14-year-old Hutchinson-Gilford progeria syndrome (HGPS) patient, the number and intensities of cytoplasmic mCherry puncta were higher than in fibroblasts from the normal 22-year-old donor (Supplementary Figure 1). Despite this difference, Z-factor analysis indicated that the system was not sufficiently robust for large-scale screening purposes, leading us to seek an alternative [37].
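The Z-factor (Z') analysis cited above [37] quantifies whether the separation between positive and negative controls is large enough, relative to their spread, for high-throughput screening; assays with Z' above roughly 0.5 are conventionally considered screen-ready. A minimal sketch of the calculation (the per-well values below are hypothetical, not data from this study):

```python
import numpy as np

def z_prime(pos, neg):
    """Z'-factor (Zhang et al., 1999):
    1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.
    Values above ~0.5 indicate an assay robust enough for HTS."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    return 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

# Hypothetical per-well readouts, e.g., cytoplasmic mCherry puncta intensity
rng = np.random.default_rng(0)
young_wells = rng.normal(1.0, 0.4, 96)  # negative control (young-donor cells)
old_wells = rng.normal(2.0, 0.5, 96)    # positive control (old-donor cells)
print(z_prime(old_wells, young_wells))  # well below 0.5: distributions overlap
```

With overlapping distributions like these, Z' falls far below the 0.5 threshold, which is consistent with the authors' conclusion that the young-versus-old readout alone was not robust enough for large-scale screening.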
Cellular senescence is accompanied by substantial reorganization of the nuclear envelope and a breakdown in nucleocytoplasmic trafficking, including altered expression and degradation of Lamin B1 and the formation of cytoplasmic chromatin fragments (CCFs) [38–45]. Thus, we reasoned that senescent fibroblasts might produce a strong signal in the NCC reporter system, one that could be used for the screening of molecules to reverse epigenetic aging. Senescence can be induced in a variety of ways, including telomere erosion, oncogene expression, and DNA damage [13, 46]. Because replicative senescence advances the DNA methylation clock but DNA damage-induced senescence does not [46, 47], we reasoned that replicatively senescent cells might be a more robust and reliable substrate for finding epigenetic age-reversal cocktails than other types of senescent cells.
To avoid unintended false-rejuvenation effects caused by the expansion of a small percentage of replication-capable cells in the senescent population, all experiments were performed in low-serum conditions that completely suppressed cell division [48]. In non-senescent, quiescent control fibroblasts, the mCherry and eGFP signals were clearly distinguishable (Figure 1B–1D). Senescent fibroblasts were generated by passaging ~40 times, each time with a 1:3–1:5 dilution with fresh media, until there was a complete absence of growth for two weeks, morphology changes characteristic of senescent cells, a dramatic increase in transcripts from the cell-cycle regulator p21 (CDKN1A), and other senescence-associated changes in gene expression (Supplementary Figure 2B, 2C). In the senescent fibroblasts, mCherry aggregated in the cytoplasm and colocalized with eGFP (Figure 1E–1G), consistent with a previous report [34, 35]. Colocalization of the signals, as measured by Pearson correlation, was significantly higher in replicatively senescent cells than in quiescent cells (Figure 1H). These experiments indicated that the NCC system could discern non-senescent from replicatively senescent cells, essentially in real time.
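The colocalization readout above reduces to a pixelwise Pearson correlation between the mCherry and eGFP channels: strongly negative or near-zero r when the reporters occupy separate compartments, high r when they co-aggregate. A minimal sketch with simulated channels (illustrative only; not the imaging pipeline used in the study):

```python
import numpy as np

def colocalization_pearson(ch1, ch2):
    """Pixelwise Pearson correlation between two fluorescence channels.
    High r: the reporters co-aggregate (senescent-like);
    low or negative r: they occupy separate compartments (quiescent-like)."""
    x = np.asarray(ch1, float).ravel()
    y = np.asarray(ch2, float).ravel()
    x, y = x - x.mean(), y - y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

# Toy two-compartment image: nucleus-only mCherry vs. cytoplasm-only eGFP
nucleus = np.zeros((64, 64))
nucleus[16:48, 16:48] = 1.0
cytoplasm = 1.0 - nucleus
print(colocalization_pearson(nucleus, cytoplasm))  # -1.0: fully compartmentalized
print(colocalization_pearson(nucleus, nucleus))    # 1.0: fully colocalized
```

Real images sit between these extremes, and the per-cell r values can then be compared across quiescent, senescent, and treated populations, as in Figure 1H.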
Reversal of characteristics of cellular senescence by epigenetic reprogramming
To assess the applicability of the NCC system for detecting interventions that restore youthful functions and gene expression patterns, we first tested whether it could detect the effects of genetically mediated epigenetic age reversal. Ectopic expression of Yamanaka factors OCT4, SOX2, and KLF4 (OSK) restores youthful gene expression patterns, epigenetic age, and youthful functions to old cells and tissues [7, 8]. Our previously published reverse tetracycline-controlled transactivator (rtTA) module and the polycistronic OSK cistron under the control of a tetracycline-inducible promoter (Tet-on OSK) were transduced using lentivirus to create stable cell lines from the human fibroblasts and passaged until they reached replicative senescence. Treatment with doxycycline was sufficient to activate the OSK cassette in these fibroblasts (Supplementary Figure 2A).
Transcriptomic changes are involved in driving an aging-related decline in function and provide effective biomarkers for predicting biological and chronological age [46, 47]. To verify whether these phenotypic changes reflected a more youthful epigenetic signature, we analyzed the transcriptional profile by genome-wide RNA-seq. A comparison of quiescent young to quiescent old cells identified 190 genes that were significantly upregulated and 326 genes that were significantly downregulated. Induction of OSK for four days led to reduced expression in 43.2% (82) of age-upregulated genes and increased expression in 65.3% (213) of age-downregulated genes (Figure 2A–2D and Supplementary Figure 2B). In all, nearly half of the genes changed by senescence were restored by OSK expression (Figure 2B, 2D, Supplementary Figure 2D, 2E). This finding is consistent with our previous findings and those of others that expressing OSK in a variety of cell types and tissues, including human and mouse fibroblasts, can substantially restore the epigenetic landscape and gene expression patterns of old cells [7, 8, 26]. We call this process the EPOCH method, for epigenetic programming of old cell health.
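The restoration percentages above follow directly from the reported gene counts; as a quick arithmetic check:

```python
# Counts from the text: 190 age-upregulated and 326 age-downregulated genes;
# OSK reduced expression of 82 of the former and raised 213 of the latter.
up_reversed = 82 / 190
down_restored = 213 / 326
print(f"{up_reversed:.1%} of age-upregulated genes reduced")     # 43.2%
print(f"{down_restored:.1%} of age-downregulated genes raised")  # 65.3%
```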
Gene ontology (GO) analysis indicated that the top 20 GO biological processes of upregulated genes encompassed key features of aging, including dysregulation of development, localization, and transport [7], eleven of which were reversed by OSK (Figure 2E). Despite the absence of cell division in all conditions, senescence caused subtle but significant changes in cell-cycle gene mRNA levels, including p21 (Supplementary Figure 2C) [49]. Numerous cell cycle-related processes were enriched among genes downregulated by senescence, and 19 of the top 20 were reversed by OSK expression (Figure 2F). The net outcome was a demonstration that induction of OSK partially counteracts the aging-related changes resulting from senescence.
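GO enrichment of this kind is conventionally scored with a one-sided hypergeometric test: given N background genes of which K belong to a term, it asks how likely it is to see at least k term members among n differentially expressed genes by chance. A stdlib-only sketch (not the specific enrichment tool used in this study; the gene counts below are hypothetical):

```python
from math import comb

def hypergeom_enrichment_p(k, n, K, N):
    """P(X >= k): probability that at least k of n DE genes fall in a
    GO term of size K, drawn without replacement from N background genes."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / comb(N, n)

# Hypothetical: 30 of 326 downregulated genes fall in a 200-gene
# cell-cycle term, against a 20,000-gene background (expected overlap ~3.3)
p = hypergeom_enrichment_p(30, 326, 200, 20000)
print(f"p = {p:.2e}")  # far below any multiple-testing threshold
```

In practice such p-values are corrected for testing many GO terms at once (e.g., Benjamini-Hochberg FDR).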
Using the NCC system, we examined the deterioration of nucleocytoplasmic integrity as cells transitioned from quiescence to senescence and the rejuvenative effects of OSK treatment on those senescent cells (Figure 2G, 2H). Cross-sectional intensity profiles of the cells were used to assess the correlation between distributions of fluorescent molecules (Figure 2I). Compared to quiescent cells, senescent cells had a significant increase in the aggregation of mCherry and eGFP, indicating disruption of nucleocytoplasmic integrity (Figure 2J). After four days of OSK treatment, NCC integrity was significantly restored in senescent cells, comparable to the quiescent, non-senescent cell population (Figure 2J). Taken together, these data show that OSK-mediated epigenetic reprogramming substantially reverses senescence-associated pathology and transcriptomic changes and that the NCC reporter system can detect rejuvenation of senescent cells by OSK.
Reversal of senescence-associated NCC changes by reprogramming small molecules
To identify small molecules that rejuvenate old and senescent cells, we curated a list of molecules that have successfully reprogrammed human and mouse somatic cells into chemically induced pluripotent stem cells (CiPSCs) [30, 31] and tested them using the NCC assay. Again, we used fully senescent cells to avoid detecting changes due to the cell cycle or transition to senescence. Epigenetic age reversal is known to occur within a week of OSK(M)-mediated reprogramming, while the epigenetic age continuously decreases until pluripotency, reaching an approximate age of zero [50–52]. To ensure consistency, we initially tested small molecule combinations on cells within the same four-day period required for OSK to rejuvenate cells safely and consistently.
To achieve age reduction without altering cell identity, we focused on small molecules that were likely to work in the early stages of CiPSC formation, including valproic acid (V), CHIR-99021 (C), E-616452 (6), tranylcypromine (T) and forskolin (F). Previous studies of reprogramming efficiency with small molecules demonstrated that either OCT4 alone or SKM, when combined with VC6T or F, respectively, can generate iPSCs, and VC6TF facilitates a mesenchymal-to-epithelial transition, an early stage of reprogramming in mouse cells [31, 53]. Because of the known differences in differentiation between mice and humans, we also investigated molecules that have been reported for the initiation states of generating human CiPSCs including CHIR-99021 (C), E-616452 (6), TTNPB (N), Y-27632 (Y), Smoothened Agonist (S), and ABT-869 (A) [30]. The molecules VC6TF (Cocktail 1: C1) and C6NYSA (Cocktail 4: C4) were used as basal reprogramming cocktails and supplemented with other boosters known to increase iPSC efficiency, including sodium butyrate, basic fibroblast growth factor (bFGF), and alpha ketoglutarate (α-KG) (Figure 3A, 3B, Supplementary Tables 1 and 2) [54].
Based on the fact that iPSCs can also be generated using SKM or O alone [55, 56], we evaluated the effect of the boosters on VC6T (SKM alternative) and F (O alternative). We also assessed the effect of combinations including C6N, because it has been reported that the removal of Y, S, or A from Cocktail 4 (C6NYSA) did not reduce CiPSC efficiency [30]. Among 80 cocktails tested in the NCC assay, the VC6TF basal cocktail was the most effective at restoring the integrity of nucleocytoplasmic compartmentalization, a key sign of age reversal (Figure 3B). A recent, unpublished study reported that 6T pre-treatment prevents senescence in human fibroblasts, and that 6, T, or 6T extends the lifespan of Caenorhabditis elegans by up to 42.1% [57]. We, however, saw no benefit of F alone or the VC6T cocktail on reversing senescence phenotypes in our system (Figure 3B). Next, we chose six cocktails of small molecules for further investigation: three based on Cocktail 1 plus two additives (referred to as Cocktails 2 and 3) and the other three based on Cocktail 4 plus additional additives (referred to as Cocktails 5 and 6) (Supplementary Table 2). Sodium butyrate, a histone deacetylase inhibitor, was one of the most effective additives in both human and mouse cocktails (C2 and C5). Basic fibroblast growth factor (bFGF) was used for Cocktail 3, while α-KG was included in Cocktail 6. To better gauge the effect of these compounds on NCC integrity, we used Pearson's correlation to assess the distribution of fluorescent proteins (Figure 3C, 3D). All six cocktails significantly improved compartmentalization in senescent cells, both in terms of correlation analysis (Figure 3C) and imaging of NCC signaling (Figure 3D).
For nearly two decades, the writing and maintenance of chromatin marks have been known to be critical for reprogramming [58]. For this reason, we incorporated inhibitors of established chromatin remodeling factors into our screen to investigate whether these factors represent barriers to, or essential drivers of, rejuvenation. The rejuvenation pathway(s) initiated by C1 and C4 were both blocked by inhibition of the H3K9 methyltransferase G9a (BIX01294, 0.5 μM) and of TGF-β (SB431542, 10 μM); however, they were not disrupted when EZH2, the H3K27 methyltransferase component of PRC2, was inhibited (DZNep, 20 nM) (Figure 3B).
Small molecules can reverse the age of the transcriptome with no loss of cell identity
Based on the improvement in NCC integrity, we performed RNA-seq to test the effect of these six cocktails on transcriptomic age. After treatment with the chemicals, we observed a strong overlap between genes affected by the chemical treatments and those affected by the switch from quiescence to senescence (Supplementary Figure 3A). We also observed that the two groups of cocktails generally perturbed the same populations of genes (Supplementary Figure 3A). Treatment with the chemical cocktails did not lead to fibroblasts taking on non-specific cell identity markers (Supplementary Figure 3B). Finally, we did not observe the expression of iPSC-specific genes or gene modules in the RNA-seq datasets (Supplementary Figure 3C, 3D). Additionally, we performed immunofluorescence looking for signs of expression of pluripotency-related genes such as NANOG and EPCAM following all cocktail treatments but could not detect any expression (Supplementary Figure 4). Collectively, these data indicate that chemically treated cells are only partially reprogrammed and are not fully reset to pluripotency.
We then tested the effect of these six cocktails on the transcriptomic age (tAge) of the cells using clocks trained on mouse, human, and combined training data sets [52]. Relative transcriptional age was assessed using a rodent transcriptomic clock as well as a combined human and rodent transcriptomic clock (Figure 4A, 4B). The change in years of age was determined using a human-specific chronological clock (Figure 4C). Compared to quiescent cells, senescent cells had a significant increase in transcriptomic age, based on the transcriptomic clocks, consistent with previous findings assessing DNA methylation age [46, 47, 59]. Treatment of NCC cells with each of the six chemical cocktails (C1-6) resulted in a statistically significant reduction in the transcriptomic age of senescent cells, with those originating from mouse studies (C1-3) generally producing a greater decrease in transcriptional age than the human-derived cocktails (Figure 4A, 4B). The reported magnitude of the effect of all six cocktails differed between the hybrid and rodent transcriptional clocks, with the hybrid clock indicating a greater decrease in age for all six cocktails and the rodent clock showing less variability between treatments.
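Transcriptomic clocks of this kind are typically penalized linear models: predicted age is an intercept plus a weighted sum over a panel of clock genes' log-normalized expression values. A schematic of how such a pre-trained clock is applied to new samples (the 3-gene weights and intercept below are placeholders, not the published clock coefficients):

```python
import numpy as np

def transcriptomic_age(log_expr, weights, intercept):
    """Apply a pre-trained linear transcriptomic clock.
    log_expr: (n_samples, n_clock_genes) log-normalized expression matrix;
    returns one predicted age per sample."""
    return intercept + np.asarray(log_expr, float) @ np.asarray(weights, float)

# Placeholder 3-gene clock: weights and intercept are illustrative only
weights = np.array([5.0, -3.0, 1.5])
intercept = 40.0
samples = np.array([[1.0, 2.0, 0.5],    # hypothetical "younger" profile
                    [2.0, 1.0, 1.0]])   # hypothetical "older" profile
print(transcriptomic_age(samples, weights, intercept))
```

In the real clocks, the weights are fit by regularized regression (e.g., elastic net) against chronological age across large training cohorts; comparing predictions before and after treatment yields the tAge shifts reported here.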
All six reprogramming cocktails also significantly decreased the estimated chronological age of NCC senescent cells by several years (Figure 4C). As observed with clock-based transcriptional age estimates, C1, C2, and C3 produced the greatest effect, reducing the measured age by more than three years after only four days of treatment. For reference, the effect of this four-day treatment is comparable to the total change seen after a year of a regenerative treatment described in a landmark study from 2019, which also focused on restoring epigenetic information [60].
To understand the effect of the chemical cocktails on cell identity and function, we assessed overall gene expression patterns of chemically treated cells and compared them to old human cells [61] and OSK(M)-induced mouse and human induced pluripotent stem cells (iPSCs) [52]. We expressed the correlation in gene expression between groups as a heatmap of Spearman's ranked correlation (Figure 4D). Despite having different chemical components, the transcriptomic profiles of all six cocktails grouped most closely together, with the human (C4-6) and mouse (C1-3) derived cocktails grouping more closely within their respective groups (Figure 4D). All six of the chemical treatments were positively correlated with the induced pluripotent stem cell (iPSC) populations and were negatively associated with mammalian age-related changes occurring in specific organs, such as kidney and brain, as well as across multiple tissues of mice, rats, and humans. In agreement with the transcriptomic clock analysis, the mouse-derived cocktails (C1-3) produced a more consistent and stronger anti-aging effect on the cellular transcriptome than the human cocktails (C4-6).
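Spearman's ranked correlation, used for the heatmap above, is simply the Pearson correlation computed on ranks, which makes it insensitive to monotone differences in scale between datasets. A minimal sketch (assumes no tied values; production code should use average ranks for ties):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of rank-transformed values.
    Ties are not averaged here, which is fine for continuous expression data."""
    rx = np.argsort(np.argsort(x)).astype(float)  # rank of each element
    ry = np.argsort(np.argsort(y)).astype(float)
    rx, ry = rx - rx.mean(), ry - ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Invariant under monotone transforms: x vs. x**3 correlates perfectly
x = np.array([0.1, 2.0, 0.5, 3.7, 1.2])
print(spearman_rho(x, x**3))  # 1.0
print(spearman_rho(x, -x))    # -1.0
```

This rank-based invariance is why Spearman correlation is preferred when comparing expression signatures across platforms and species, as in Figure 4D.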
Next, we performed gene set enrichment analysis (GSEA) to identify which pathways might be responsible for the similarities and differences between the chemical treatments, signatures of aging, and OSK(M)-induced iPSCs. The KEGG genes database, HALLMARK gene set collection, and Reactome pathways database were included in this analysis (Figure 4E). The anti-aging effects of chemical cocktails, especially mouse-derived ones, were associated with the upregulation of respiration associated pathways, such as oxidative phosphorylation and mitochondrial translation, as well as downregulation of hypoxia and multiple inflammation-associated pathways, such as interferon and JAK-STAT signaling, which are known to be involved in the SASP. The activation of JAK-STAT signaling by interferons and other SASP factors, for example, contributes to the complex interplay between senescent cells and their microenvironment. Together, these data show that the chemical cocktails identified in this study not only reverse the effects of senescence on NCC and make them transcriptionally younger, but they also reverse key transcriptional signatures of senescence (Figure 4F).
Discussion
In this study, we provide evidence, based on protein compartmentalization and gene expression patterns in young and senescent cells, that small molecules can reverse the transcriptomic age of cells without erasing cell identity or inducing iPSC-like states. We refer to this approach as the EPOCH method.
The effectiveness of the NCC system as an apparent surrogate biomarker for biological age reversal, with young, old, senescent, HGPS, and OSK-treated cell lines serving as controls, should set the stage for larger, more expansive screens for rejuvenation factors. Follow-up studies are underway to elucidate the cellular machinery that mediates these rejuvenative effects, with an emphasis on the mechanisms by which cells apparently write then later read a "backup copy" of earlier epigenetic information to reset chromatin structures and reestablish youthful gene expression patterns.
Disruption of NCC is a well-established effect of aging across species and is directly associated with other diseases, including amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD). This study shows that the expression of OSK results in a noticeable improvement in the integrity of nucleocytoplasmic compartmentalization in replicatively senescent cells. Further study into how EPOCH chemical cocktails restore NCC integrity and the partitioning of proteins may therefore offer therapeutic avenues for improving the health of older individuals and patients with age-related diseases of specific cell types and tissues. The nervous system is one example where the presence of healthy NCC is crucial for the proper functioning of tissue, and it is often affected in diseases related to aging [34–36, 62, 63]. Other methods of age control, such as the recently published Inducible Changes in the Epigenome (ICE), which has the ability to accelerate normal epigenetic aging both in vitro and in vivo, should aid in such studies [5, 7].
Transcriptomic analyses of epigenetic reprogramming by OSK and rejuvenation cocktails indicate that these interventions broadly ameliorate features of senescence, as illustrated by the striking changes in senescence-associated gene expression patterns involving inflammation, mitochondrial metabolism, lysosomal function, apoptosis, p53, and growth signaling. Furthermore, the observation from the transcriptomic clocks that all six chemical cocktails, C1-C6, decreased both biological and chronological age below that of even the non-senescent cell populations indicates that the cocktails are potent and capable of reversing senescence-associated cellular dysfunction. Despite the differences in the composition of mouse- and human-based chemical cocktails, both mostly affected the same grouping of genes, suggesting that the effects may be operating through shared pathways.
Experiments are in progress to understand the effect of the cocktails on various cell types from young and old individuals, the results of which will inform us about the extent to which they parallel the broadly beneficial effects of OSK(M) on cells and tissues. The chemical cocktail that induced the most potent rejuvenation was VC6TF. Given that VC6TF has not been reported to be capable of fully reprogramming human cells to CiPSCs, and the maximum duration of any chemical treatment was limited to only four days, this study substantiates the notion that the rejuvenation is inherent to early phases of reprogramming and is at least partially separable from pluripotency programs [7, 8].
To fully understand how chemical epigenetic age reversal works, it will be important to identify the factors and interactions responsible and compare them to those triggered by expression of OSK. Do they work via transcription factors, OCT4, SOX2 and KLF4, or are they initiating an independent program? Additional work is also needed to determine which regulators of chromatin and transcription are involved, such as the TET enzymes, PRC1/2, and HDACs. The results from this study, and those in progress, suggest that some, but not all, of the rejuvenation mechanisms are shared between the two modes of partial reprogramming. Given that BIX01294, a G9a histone methyltransferase inhibitor, can promote full reprogramming and the formation of iPSCs, it may be that chemical rejuvenation relies on distinct pathways that establish new H3K9me1/2 marks [64]. G9a has not been extensively studied in the context of aging, except for a report citing an age-related decrease in its associated marks in certain tissues [65].
CHIR99021 is a GSK3α/β inhibitor, an effective inducer of CiPSCs and promoter of certain stem cell characteristics [66, 67]. E-616452, also known as RepSox, is a TGF-β inhibitor that has been used in experiments to replace SOX2 during epigenetic reprogramming [68, 69]. All the efficacious reprogramming chemical cocktails included these compounds, suggesting that these components together are potent contributors to the cellular rejuvenation in the treated cell populations. Various research groups have observed that chemical cocktails containing CHIR99021 and E-616452 can induce direct reprogramming between differentiated cell states [70, 71]. This is important because it suggests that the processes involved in both rewriting and replacing cellular epigenetic identity are affected by the additive effects of these chemical compounds. Moreover, independent studies have found associations with individual chemicals and reprogramming in various contexts, indicating that each component likely contributes to rejuvenation through a broad range of mechanisms [54, 72].
Valproic acid is a well-known broad-spectrum histone deacetylase inhibitor that leads to a rapid and dramatic spread of histone acetylation marks across the genome [73]. The fact that valproic acid is a critical component of many of the successful cocktails indicates that the spread of euchromatin may be an important component of partial epigenetic reprogramming [73]. Sodium butyrate is another histone deacetylase inhibitor that was effective in both human and mouse cocktails. It has been reported to improve the expression of genes associated with reprogramming, supporting the model that the regulation of histone acetylation marks is crucial for rejuvenation via reprogramming [54]. The final chemical in our most efficacious C1 cocktail, forskolin, is an activator of adenylyl cyclase that has been shown to drive reprogramming and transdifferentiation, depending upon the combination of other compounds present [74, 75]. While the mechanism of action of forskolin in the context of rejuvenation remains to be identified, increasing cellular levels of cAMP and the triggering of signal cascades that are critical for adaptations in cell identity may be key.
This study focused on physiological rejuvenation and analysis of specific and well-established epigenomic signatures of aging. Whether chemical reprogramming can attenuate or reverse other hallmarks of aging and how effective it is on non-senescent cells and different cell types, tissues, and species, requires additional exploration. Experiments are in progress to determine the persistence of the rejuvenative effect after reprogramming concludes and the mechanisms by which chemical EPOCH (cEPOCH) works.
Although the potential of these and other combinations of chemicals to achieve cEPOCH is great, from treating blindness to liver failure and skin damage, in light of the toxic effects of expressing all four Yamanaka factors in mice [22], it is critical that the safety of chemical rejuvenation cocktails is tested rigorously in mammalian animal models before human trials are initiated. Although transcriptomic analysis did not indicate any developing pluripotency, based on the absence of mRNA for pro-tumorigenic genes such as NANOG and by RNA-seq analysis looking for pluripotency signatures, the only way to assess the full safety of these and other rejuvenative cocktails is to test their effects in multiple animal models, paying particular attention to signs of tissue dysplasia or cancer. To date, our experiments with genetic and chemical rejuvenation methods indicate that cells possess a barrier to becoming too young or completely losing their identity like iPSCs created using OSKM. Understanding this putative barrier would also speed the identification and development of improved age reversal methods.
The observation that genetic and chemical rejuvenation of cells is possible, restoring earlier gene expression patterns while retaining cellular identity, indicates that old cells possess information to reset their biological age, consistent with the Information Theory of Aging. Identifying how this putative information is encoded and where it resides will greatly speed the development of increasingly effective approaches to rejuvenate cells.
Future work will be directed to understanding how long the effects of these and other EPOCH treatments last in vivo and whether they reverse aspects of aging and extend lifespan in mice, paralleling treatment with AAV-OSK [7, 8, 26]. The assays developed in this study, combined with robotics and the increasing power of artificial intelligence, will facilitate increasingly larger screens for genes, biologics, and small molecules that safely reverse mammalian aging, and, given that aging is the single greatest contributor to human disease and suffering, these advances cannot come soon enough.
Materials and Methods
Cell culture
Human fibroblasts derived from a healthy 22-year-old, 94-year-old, and a 14-year-old with HGPS were obtained from the Coriell Institute (GM23976, AG08433, and AG11498). These cells were cultured in DMEM supplemented with 20% fetal bovine serum (FBS), 1% penicillin-streptomycin (P/S), and 0.1 mM β-mercaptoethanol (β-ME). For Tet-On cells, the medium composition was adjusted to include 15% tetracycline (Tet)-free FBS instead of the usual 20% FBS. To induce replicative senescence, fibroblasts were passaged until their growth ceased completely for at least two weeks. Senescence was confirmed through various assessments, including analysis of cell morphology, cell size, and gene expression of gold-standard senescence markers.
The human iPSC (hiPSC) line AG27602 (Coriell Institute) was used as a positive control for staining of iPSC cell markers and was cultured in mTeSR™ Plus (Stem Cell Technologies, #100-1130) according to manufacturer guidelines.
Generation of stable cells
To generate NCC-stable cells, we used FugeneHD (Promega, E2311) for transfection of pLVX-EF1alpha-2xGFP:NES-IRES-2xRFP:NLS (addgene, #71396), psPAX2 (addgene, #12260), and pMD2.G (addgene, #12259) into 293FT cells following the provided instructions. The 293FT cell medium was collected two and four days post-transfection and filtered through a 0.45-micron filter. To facilitate transduction, the collected medium was combined in a tube, concentrated using the Lenti-X™ Concentrator (Takara Bio, #631231), and added to human fibroblast medium with polybrene at a concentration of 5 μg/ml. After 24 hours, the medium was replaced with fresh medium. Following approximately one week, a subset of fibroblasts showing stable expression of mCherry and GFP were observed, and NCC-positive cells were sorted using the BD FACSAria system.
For cloning of the pLVX-UbC-rtTA-hOSK-Neo vector, the Ngn2:2A:EGFP and PuroR cassettes on pLVX-UbC-rtTA-Ngn2:2A:EGFP (addgene, #127288) were swapped with the hOCT4:2A:hSOX2:2A:hKLF4 and NeoR cassettes, respectively. NCC fibroblasts were transduced with hOSK lentivirus using the same procedure as mentioned earlier to achieve stable hOSK expression. Two days post lentiviral transduction, the cells were cultured in DMEM supplemented with 15% Tet-free FBS, 1% P/S, 0.1 mM β-ME, and 200 μg/ml G418. To induce hOSK expression, senescent fibroblasts were treated with doxycycline (2 μg/ml) for stated periods.
Small molecule treatment
The small molecules were dissolved in suitable solvents and carefully stored according to the recommended conditions (Supplementary Table 1). To prepare for the small molecule treatment, the growth medium was changed to a low serum medium with 1% FBS, a day prior to the treatment. Fresh low serum medium was used to prepare the small molecule solution, which was thoroughly mixed before replacing the old medium in the dish. The medium containing the small molecules was completely refreshed every other day until the cells were harvested. To evaluate the alterations in NCC signals resulting from the small molecule treatment, NCC images were captured using the Cytation C10 (Agilent) imaging system. NCC correlation was calculated using Cellprofiler® colocalization analysis.
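The colocalization readout above can be made concrete. Below is a pure-Python sketch of a Pearson colocalization coefficient over paired pixel intensities from the two reporter channels; this illustrates the statistic only, not the CellProfiler implementation, and the function name is ours.

```python
def pearson_colocalization(channel_a, channel_b):
    """Pearson correlation of paired pixel intensities from two channels
    (e.g. GFP:NES vs. RFP:NLS within one cell). Higher correlation means
    the two signals overlap more, i.e. weaker nucleocytoplasmic
    compartmentalization; well-separated compartments give low or
    negative values."""
    n = len(channel_a)
    if n != len(channel_b) or n < 2:
        raise ValueError("need two equal-length pixel lists")
    mean_a = sum(channel_a) / n
    mean_b = sum(channel_b) / n
    cov = sum((a - mean_a) * (b - mean_b)
              for a, b in zip(channel_a, channel_b))
    var_a = sum((a - mean_a) ** 2 for a in channel_a)
    var_b = sum((b - mean_b) ** 2 for b in channel_b)
    return cov / (var_a * var_b) ** 0.5
```

In CellProfiler this corresponds to the correlation measurement of its colocalization module, computed per object and then compared across treatment groups.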
Immunofluorescence
Cells were fixed in 3.7% paraformaldehyde (PFA) for 15 minutes and washed three times with 1X PBS. Then cells were permeabilized in 0.1% Triton X-100 in PBS followed by 30 minutes of blocking with 1% bovine serum albumin (BSA) in PBS + 0.1% Tween-20 (PBST) + 22.52 mg/mL glycine. Primary antibodies were used at the following concentrations in 1% BSA in PBST: NANOG (Invitrogen, PA5-85110) 1:200, and EPCAM (Abcam, ab71916) 1:100. Primary antibodies were incubated for 1 hour at room temperature followed by three washes of PBS. Then secondary antibodies were used at 1:1000 in 1% BSA in PBST (Goat anti-rabbit Alexa Fluor™ 647, Invitrogen A-21244 or Goat anti-rabbit Alexa Fluor™ 488, Invitrogen A-11008), incubated for one hour, and followed by three washes of PBS. Nuclear counterstaining was performed for 15 minutes using Hoechst 33342 (1:2000 in PBS) followed by a final three washes with 1X PBS. Staining was assessed by 10X wide field fluorescence imaging using the IXM-LZR and processed using Metaxpress and ImageJ.
RNA sequencing and analysis
RNA was harvested from cells using the Omega ENZA Total RNA kit and assessed for quality and integrity using an Agilent Tapestation. Library preparation and 150 bp paired-end sequencing were performed on an Illumina NovaSeq by Novogene. Fastq read files were processed using FastQC. Illumina adapters were removed using TrimGalore! (Version 0.4.0, Babraham Bioinformatics), and reads were aligned to the mm10 genome using Hisat2 (Version 2.2.1) [76]. Aligned reads were assembled using StringTie (Version 1.3.3b) [77], and expression levels and transcripts were estimated. Differential expression was determined using DEseq2 [78], with FDR < 0.05.
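The read-processing steps above are driven by command-line tools. The sketch below only assembles the corresponding commands; the file names and index path are placeholder assumptions, and only the basic, well-known flags of each tool are shown. Each returned list could be passed to `run` on a system where the tools are installed.

```python
import subprocess

def trim_cmd(read1, read2):
    # Adapter removal with Trim Galore! on a paired-end sample
    return ["trim_galore", "--paired", read1, read2]

def align_cmd(index, read1, read2, sam_out):
    # Spliced alignment with HISAT2 against a prebuilt index
    return ["hisat2", "-x", index, "-1", read1, "-2", read2, "-S", sam_out]

def assemble_cmd(bam, reference_gtf, gtf_out):
    # Transcript assembly and expression estimation with StringTie
    # (-e restricts estimation to the reference transcripts)
    return ["stringtie", bam, "-G", reference_gtf, "-o", gtf_out, "-e"]

def run(cmd):
    # Raises CalledProcessError if the tool exits non-zero
    subprocess.run(cmd, check=True)
```

Differential expression (DESeq2) is then run in R on the resulting count matrix.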
Signature association analysis
Association of gene expression log-fold changes induced by chemical C1-6 cocktails in human fibroblasts with established transcriptomic signatures of mammalian aging and OSK (M)-induced iPSCs was examined with the Spearman correlation method as described previously [61]. The utilized signatures of aging included tissue-specific liver, kidney, and brain signatures as well as multi-tissue signatures of the mouse, rat, and human [61]. Signatures of OSKM reprogramming included genes differentially expressed during cellular reprogramming of mouse fibroblasts (mouse), and shared transcriptomic changes during OSK(M)-induced reprogramming of mouse and human fibroblasts (mouse and human) [47]. Pairwise Spearman correlations for gene expression changes induced by chemical cocktails and transcriptomic signatures of aging and OSK (M) reprogramming were calculated based on the union of the top 300 genes with the lowest p-value for each pair of signatures.
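As a minimal pure-Python sketch of this procedure, assume each signature is a mapping from gene symbol to a (logFC, p-value) pair; the helper names are ours, and in practice an established statistics library would supply the correlation.

```python
def _ranks(values):
    # Average ranks with tie handling (1-based), as Spearman requires
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman rho = Pearson correlation of the rank-transformed values
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def signature_correlation(sig1, sig2, top=300):
    """Correlate logFCs over the union of the `top` lowest-p genes of each
    signature, restricted to genes present in both (sig: gene -> (logFC, p))."""
    top1 = sorted(sig1, key=lambda g: sig1[g][1])[:top]
    top2 = sorted(sig2, key=lambda g: sig2[g][1])[:top]
    genes = [g for g in set(top1) | set(top2) if g in sig1 and g in sig2]
    return spearman([sig1[g][0] for g in genes], [sig2[g][0] for g in genes])
```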
For the identification of enriched functions affected by chemical cocktails, we performed functional GSEA [79] on a pre-ranked list of genes or proteins based on log10 (p-value) corrected by the sign of regulation, calculated as:
−log10(pv) × sgn(lfc)
where pv and lfc are p-value and logFC of a certain gene, respectively, obtained from edgeR output, and sgn is the signum function (equal to 1, −1 and 0 if value is positive, negative, or equal to 0, respectively). HALLMARK, KEGG, and REACTOME ontologies from the Molecular Signature Database (MSigDB) were used as gene sets for GSEA. The GSEA algorithm was performed separately for each cocktail via the fgsea package in R with 5000 permutations. P-values were adjusted with Benjamini-Hochberg method. An adjusted p-value cutoff of 0.1 was used to select statistically significant functions. A similar analysis was performed for gene expression signatures of aging and OSK (M) reprogramming.
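Assuming the metric above is −log10(pv) × sgn(lfc), a one-line helper makes its behaviour concrete: strongly significant upregulated genes receive large positive scores and strongly significant downregulated genes large negative scores, which is what the pre-ranked GSEA expects.

```python
import math

def gsea_rank_score(pv, lfc):
    """Pre-ranking metric: -log10(p-value) signed by the direction of
    regulation. lfc == 0 maps to a score of 0."""
    sgn = (lfc > 0) - (lfc < 0)  # signum: 1, -1, or 0
    return -math.log10(pv) * sgn
```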
Transcriptomic clock analysis
To assess the transcriptomic age (tAge) of fibroblasts treated with chemical cocktails, we applied multi-tissue chronological human, lifespan-adjusted biological rodent (mouse + rat) and hybrid (mouse + rat + human) transcriptomic clocks based on the identified gene expression signatures of aging [52]. For data preprocessing, filtered RNA-seq count data were log-transformed and scaled. The missing values corresponding to clock genes not detected in the data were imputed with the precalculated average values. Estimated sample tAges were centered around the median tAge of control quiescent cells. Pairwise differences between average tAges of senescent untreated cells and either quiescent cells or senescent cells treated with C1-6 cocktails were assessed using independent t-tests. Resulting p-values were adjusted with the Benjamini-Hochberg method.
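Two of the steps described above, centering tAges on the control median and Benjamini-Hochberg adjustment, can be sketched in pure Python. These are illustrative versions with our own function names; a standard statistics library implementation would normally be used.

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR adjustment: scale the i-th smallest p-value
    by m/i, then enforce monotonicity from the largest rank downwards."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):
        idx = order[rank - 1]
        running_min = min(running_min, pvals[idx] * m / rank)
        adjusted[idx] = running_min
    return adjusted

def center_on_controls(tages, control_indices):
    """Center predicted transcriptomic ages on the median tAge of the
    control (quiescent) samples."""
    ctrl = sorted(tages[i] for i in control_indices)
    k = len(ctrl)
    median = ctrl[k // 2] if k % 2 else (ctrl[k // 2 - 1] + ctrl[k // 2]) / 2
    return [t - median for t in tages]
```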
iPSC profiler
In order to validate that the cells treated with chemical reprogramming cocktails did not lose their fibroblast cell type identity, the transcriptomes of all samples were compared against those of human stem cells using the SEQUIN iPSC Profiler, as previously described [80].
Acknowledgments
We wish to thank Bruce Ksander for his advice and support and members of the Sinclair lab, in particular Hudson Eaton, Sally Tabakh, Callahan Rogers, Ayame Bluebell, Juliana Vasquez, and Luis Rajman for their input, guidance, and support. This paper is dedicated to Jezebel and Gizmo Poon.
Ethical Statement
We have read and followed the COPE Best Practice Guidelines. No animal work was performed in this research study.
Funding
This research was supported by grants from the Hoff Foundation, The Glenn Foundation for Medical Research, and NIH/NIA (R01AG019719). V.N.G. is supported by grants from the NIA. X.T. was supported by NIH/NIA (K99AG068303). J-H.Y. was supported by National Research Foundation of Korea 2012R1A6A3A03040476.
The ability of the Yamanaka factors to erase cellular identity raised a key question: is it possible to reverse cellular aging in vivo without causing uncontrolled cell growth and tumorigenesis? Initially, it didn't seem so, as mice died within two days of expressing OSKM. But work by the Belmonte lab, our lab, and others has confirmed that it is possible to safely improve the function of tissues in vivo by pulsing OSKM expression [22, 23] or by continuously expressing only OSK, leaving out the oncogene c-MYC [7, 8]. In the optic nerve, for example, expression of a three Yamanaka factor combination safely resets DNA methylomes and gene expression patterns, improving vision in old and glaucomatous mice via a largely obscure mechanism that requires TET DNA demethylases [8]. Numerous tissues, including brain tissue, kidney, and muscle, have now been reprogrammed without causing cancer [7, 8, 22, 24, 25]. In fact, expression of OSK throughout the entire body of mice extends their lifespan [26]. Together, these results are consistent with the existence of a "back-up copy" of a youthful epigenome, one that can be reset via partial reprogramming to regain tissue function, without erasing cellular identity or causing tumorigenesis [7–9].
Currently, translational applications that aim to reverse aging, treat injuries, and cure age-related diseases rely on the delivery of genetic material to target tissues. This is achieved through methods like adeno-associated viral (AAV) delivery of DNA and lipid nanoparticle-mediated delivery of RNA [7, 8, 27]. These approaches face potential barriers to widespread use, including high costs and safety concerns associated with the introduction of genetic material into the body.
Bone marrow mesenchymal stem cells derived from juvenile ...

Abstract
Background
Female sex hormone secretion and reproductive ability decrease with ageing. Bone marrow mesenchymal stem cells (BMMSCs) have been postulated to play a key role in treating ovarian ageing.
Methods
We used macaque ovarian ageing models to observe the structural and functional changes after juvenile BMMSC treatment. Moreover, RNA-seq was used to analyse the ovarian transcriptional expression profile and key pathways through which BMMSCs reverse ovarian ageing.
Results
In the elderly macaque models, the ovaries were atrophied, the regulation ability of sex hormones was reduced, the ovarian structure was destroyed, and only local atretic follicles were observed, in contrast with young rhesus monkeys. Intravenous infusion of BMMSCs in elderly macaques increased ovarian volume, strengthened the regulation ability of sex hormones, reduced the degree of fibrosis, inhibited apoptosis, increased the density of blood vessels, and promoted follicular regeneration. In addition, the ovarian expression profile of ageing-related genes in the elderly treatment group reverted towards that of the young control group: of 1258 differentially expressed genes, 415 genes upregulated with age were downregulated and 843 genes downregulated with age were upregulated after BMMSC treatment, and the top 20 differentially expressed genes (DEGs) in the protein-protein interaction (PPI) network were significantly enriched in oocyte meiosis and progesterone-mediated oocyte maturation pathways.
Conclusion
The BMMSCs derived from juvenile macaques can reverse ovarian ageing in elderly macaques.
Introduction
As females age, both fertility and ovarian endocrine function naturally decline due to waning follicle numbers as well as ageing-related cellular dysfunction [1, 2]. Currently, ovarian failure and endocrine disruption are not curable. Societal changes and the increasing desire to preserve fertility have led to various treatment methods, including sex hormone replacement, cytokines, and traditional Chinese medicine (TCM) treatments, aimed at treating ovarian ageing and thereby restoring fertility and endocrine secretion. However, the long-term use of hormone replacement therapy may cause breast cancer, thrombosis, and other diseases [3]. Cytokine therapy has not yet developed into a large-scale industry and is expensive, characteristics that are not conducive to its widespread application [4]. TCM treatment can partially improve ovarian function, but TCM drug compositions have not been fully elucidated, and there are many uncertain factors [5]. Although assisted reproductive technologies (ARTs) and the “freeze-all” strategy of cryopreserving all oocytes or good-quality embryos have increased the range of options [6], the overall success rate for older women remains very low. Therefore, it is necessary to seek new and effective treatment methods.
Ageing ovaries manifest mainly with tissue atrophy, functional degeneration, insufficient self-renewal ability of reproductive helper cells, and decreased secretion of sex hormones. Bone marrow mesenchymal stem cells (BMMSCs) have multidirectional differentiation potential, a strong self-renewal capacity, and biological characteristics of exosomes secreted with various cytokines [7], and they may become a new tool to delay or reverse ovarian ageing [8]. Many clinical and basic studies have shown the effectiveness of mesenchymal stem cells (MSCs) in the treatment of ovarian ageing, and MSCs have been demonstrated to be more effective than other cell types in improving ovarian function [9]. Human amniotic fluid MSCs (hAFMSCs) can restore ovarian physiological ageing (OPA) function [10]. Human placental MSCs (hPMSCs) can inhibit oxidative stress and apoptosis, thereby improving ovarian function [11]. Exosomes secreted by human umbilical cord MSCs (hUC-MSCs) have a stimulatory effect on primordial follicles and accelerate follicular development [12]. These findings show that MSCs can regulate the secretion of female sex hormones and improve ovarian structure.
However, to date, research on animal models for BMMSC-mediated treatment of ageing and other diseases has focused on small- and medium-sized animals, and there are few studies on primates; furthermore, systematic and standardized studies are lacking. Therefore, in this study, we used a macaque ovarian ageing model as a research object and observed the structural and functional effects of juvenile macaque BMMSCs on ageing macaque ovaries. In addition, we explored the molecular regulatory mechanism by which BMMSCs reverse macaque ovarian ageing. This work provides a theoretical basis and a reference technical solution for the use of BMMSCs to treat ovarian ageing
Materials and methods
Materials
Macaques and BMMSC sources
Macaques were provided by the Kunming Institute of Zoology, Chinese Academy of Sciences, and the experiments were performed at the Cell Biological Therapy Center of the 920th Hospital of the Chinese People’s Liberation Army. The BMMSCs of juvenile male macaques were provided by our laboratory.
Methods
Evaluation of ovarian ageing models in the elderly macaques
Ovarian ageing models were evaluated according to age, back and facial features, level of sex hormones, and ovarian morphological structure. Female macaques aged between 22 and 26 years old were used as the elderly group, while young female macaques aged between 6 and 8 years old were used as the control group. Five millilitres of whole blood was drawn intravenously and centrifuged to obtain serum, and 0.5 mL of supernatant was loaded into a Unicel DXI800 Access Immunoassay System to detect the levels of sex hormones. Ovarian tissue was removed from anaesthetized macaques and divided for size and morphological analysis and haematoxylin-eosin (HE) staining. Finally, ten healthy elderly female macaques and 5 healthy young female macaques were screened (see supplementary 2 for detailed steps).
Preparation of BMMSCs
BMMSCs of 2- to 3-year-old macaques were isolated and cultured by the adherence method. The morphology and growth characteristics of P0 to P4 BMMSCs were observed. P4 BMMSCs were used for flow cytometric analysis to determine the proportion of BMMSC surface antigens and for adipogenic, osteogenic, and chondrogenic induction and differentiation experiments based on methods published previously by our research group [13,14,15,16,17].
Macaques grouping and BMMSC transplantation treatment
According to the advice of breeding experts from the Kunming Institute of Zoology, Chinese Academy of Sciences, 10 elderly macaques were randomly divided into an elderly model group (n = 4) and an elderly treatment group (n = 6), and the remaining 5 macaques formed the young control group (n = 5). The P4 BMMSCs were diluted with 0.9% sterile sodium chloride solution to a concentration of 2 × 106 cells/mL. After the macaques of the treatment group had been fixed, BMMSCs were infused via a femoral vein at a dose of 1 × 107 cells/kg per macaque once every other day for a total of 3 infusions. The macaques in the control and model groups were administered equal volumes of 0.9% sterile sodium chloride solution (see supplementary 2 for detailed steps).
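From the stated suspension concentration (2 × 10^6 cells/mL) and dose (1 × 10^7 cells/kg), the infusion volume per animal follows directly; the helper below is ours, and the 6 kg example weight is hypothetical.

```python
def infusion_volume_ml(body_weight_kg,
                       dose_cells_per_kg=1e7,
                       concentration_cells_per_ml=2e6):
    """Volume of BMMSC suspension needed for one infusion:
    (weight * dose per kg) / cells per mL."""
    return body_weight_kg * dose_cells_per_kg / concentration_cells_per_ml

# A hypothetical 6 kg macaque would receive 6e7 cells in 30 mL per infusion.
```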
PET-CT observation of ovarian structure and function
Before the experiment, the macaques were fasted for 6 h, injected intravenously with 18F-FDG at a dosage of 3.70–4.44 MBq/kg for 60 min, and subjected to whole-body scanning with a GE DiscoveryTM PET/CT Elite system. CT was conducted using conventional whole-body spiral scanning with the following conditions: tube voltage 120 kV, tube current 240 mA, pitch 0.561, rotation speed 0.5 s/rotation, layer thickness 3.75 mm, and matrix 512 × 512. PET scanning was conducted with one bed position for 2 min. BestDicom software was used to analyse the different cross-sections, and the maximum standardized uptake value (SUVmax) and CT value were recorded.
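SUVmax is the maximum, over a region, of the body-weight-normalized standardized uptake value. A sketch of the underlying formula, assuming a tissue density of 1 g/mL (the scanner software computes this internally; the function name and example values are ours):

```python
def suv(tissue_activity_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """Standardized uptake value: tissue activity concentration divided by
    the injected dose per gram of body weight (body-weight normalization,
    assuming 1 g/mL tissue density)."""
    dose_kbq = injected_dose_mbq * 1000.0
    weight_g = body_weight_kg * 1000.0
    return tissue_activity_kbq_per_ml / (dose_kbq / weight_g)
```

SUVmax for a region is then simply `max(suv(c, dose, weight) for c in region_voxels)`.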
Detection of sex hormone levels in peripheral blood
Five millilitres of peripheral blood was collected into a heparin tube at 3, 6, and 8 months after BMMSC treatment and centrifuged at 1500 r/min for 5 min. The supernatant was transferred to a 1.5-mL EP tube and centrifuged at 3000 r/min for 3 min; 0.5 mL of the supernatant was then added to the Unicel DXI800 Access Immunoassay System to detect the expression levels of AMH, hFSH, hLH, PRL, Prog, Testo, and E2.
Collection of macaque ovarian tissues
At 8 months after BMMSC treatment, the macaques were euthanized by anaesthesia with 3% sodium pentobarbital. The abdominal cavity was opened to expose the uterus, and each ovary was located along the fallopian tube, excised, weighed (g) on an electronic balance, and imaged. One ovary was sectioned in the horizontal and vertical directions into 4 pieces approximately 1 mm3 in size. Two of the sections were placed in a cryopreservation tube, to which 1.8 mL RNA protection solution was added, and stored in liquid nitrogen for transcriptome sequencing. The remaining two sections were fixed in 4% paraformaldehyde solution, dehydrated, embedded in paraffin, and sectioned at a thickness of approximately 4 μm for subsequent histopathological tests.
Determination of the histological structure of macaque ovarian tissues after BMMSC treatment
HE staining was performed to observe ovarian structure and follicles, Masson staining was performed to observe the degree of fibrosis, a TUNEL assay was performed to analyse apoptosis, immunohistochemical staining was used to observe the blood vessels, and immunofluorescence staining was performed to track BMMSCs (see supplementary file 1).
Transcriptome sequencing of ovarian tissue
Ovarian tissue was ground and lysed, and total RNA was extracted and sequenced. Raw data were obtained by high-throughput sequencing, and the reads were processed by adapter removal and quality control to obtain clean reads. FastQC was used to assess the quality of the sequencing data. HTSeq-count was used to count the number of reads mapped to each genomic feature (gene). Differential expression analysis was performed with DESeq2. The GO and KEGG annotations of the identified differentially expressed genes (DEGs) were analysed, and Fisher's exact test was used to calculate the significance level of each GO and pathway term.
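Fisher's exact test for term enrichment reduces to a one-sided hypergeometric tail probability. A self-contained sketch using only the standard library (the gene counts in the test are illustrative, and the function name is ours):

```python
from math import comb

def enrichment_p(deg_in_term, deg_total, term_size, universe):
    """One-sided Fisher's exact (hypergeometric) p-value for GO/KEGG term
    enrichment: probability of drawing at least `deg_in_term` annotated
    genes when `deg_total` DEGs are sampled from a `universe` of genes,
    of which `term_size` carry the annotation."""
    p = 0.0
    upper = min(deg_total, term_size)
    denom = comb(universe, deg_total)
    for k in range(deg_in_term, upper + 1):
        p += comb(term_size, k) * comb(universe - term_size, deg_total - k) / denom
    return p
```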
Statistical analysis
Statistical analyses were performed using SPSS 21.0. The data are expressed as the mean ± standard deviation. Statistical significance between the elderly model and young control groups, and between the elderly treatment and model groups, was assessed by t test.
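The two-group comparison can be illustrated with a two-sample t statistic. The Welch (unequal-variance) variant shown here is our assumption, since the text does not state which form SPSS was configured to report; with groups of n = 4-6, the unequal-variance form is the conservative choice.

```python
def welch_t(sample_a, sample_b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom for
    two independent samples with possibly unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)  # sample variance
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / se2 ** 0.5
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

The p-value is then obtained from the t distribution with `df` degrees of freedom.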
Fig. 1 Evaluation of elderly macaques as research models of ovarian ageing. a Facial features of young macaques. b Facial features of elderly macaques. c Back features of young macaques. d Back features of elderly macaques. e Ovarian morphology of young macaques. f Ovarian morphology of elderly macaques. g HE staining of a young macaque ovary (100×). h HE staining of a young macaque ovary (400×). i HE staining of an elderly macaque ovary (100×). j HE staining of an elderly macaque ovary (400×). k Statistical analysis of ovarian organ index of young and elderly macaques. l Statistical analysis of sex hormone secretion levels in young and elderly macaques
Morphology of BMMSCs
The growth state of BMMSCs was observed under an inverted fluorescence phase-contrast microscope. The results showed that a small number of primary BMMSCs migrated out in a short spindle shape after 3–4 days, and a large number of suspended impurities were present in the supernatant (Fig. 2a). The P4 fibroblast-like BMMSCs were densely arranged in a spiral pattern and exhibited a long spindle shape, obvious directionality, typical cell morphology characteristics, uniform morphology, and a strong refractive index (Fig. 2b). Subsequently, the P4 BMMSCs were labelled with enhanced green fluorescent protein (E-GFP), which they expressed stably (Fig. 2c).
The major functions of the ovaries are to govern the health of the female by regulating endocrine status and the production of mature oocytes [18]. Therefore, the sex hormone levels in peripheral blood were assessed to evaluate the effects of BMMSCs on ageing ovaries. Compared to those before treatment, the levels of Prog and Testo were significantly increased at 3, 6, and 8 months (Fig. 3c; p < 0.01). The PRL level was significantly increased relative to the 0-month level at 8 months (Fig. 3c; p < 0.05). The E2 level was significantly increased (p < 0.01) at 3 months and had decreased by 6 and 8 months but remained higher than the level at 0 months (Fig. 3c; p < 0.05). However, the levels of hFSH, hLH, and AMH were not significantly different (Fig. 3c; p > 0.05) before and after BMMSC treatment.
Folliculogenesis is a precise and orderly process of internal coordination and external regulation in women [18]. A decline in ovarian function characterized by a decrease in both the quantity and quality of primordial follicles occurs with ageing [12]. In the present study, the changes in ovarian histopathological structure reflected the therapeutic effect of BMMSCs. Interestingly, ovary morphology was improved after BMMSC treatment (Fig. 4a). HE staining was used to visualize ovarian structures (Fig. 4b). In the young control group, primordial, primary, secondary (red arrows), and mature follicles (blue arrows) were observed, and contextual interstitial communication was obvious. In the elderly model group, no obvious follicle structure was observed, and large amounts of connective tissue (green arrows) and brown-yellow pigment deposition (black arrows) were observed in local areas. In the elderly treatment group, a number of primordial, primary, secondary (red arrows), and atretic follicles were observed, with clear ovarian structure.
Fig. 4
Ovarian histopathological observation after BMMSC treatment. a The morphology of ovaries. b HE staining was performed to observe ovarian structure (100 μm). c Masson staining was performed to observe the degree of fibrosis (50 μm). d TUNEL assay was performed to analyse apoptosis (100 μm). e Immunohistochemical staining was performed to observe the density of blood vessels (50 μm). f Immunofluorescence staining was performed to track BMMSCs in the ovary (50 μm). g Statistical analyses of the degree of fibrosis, the percentage of apoptotic cells, the density of blood vessels, and the ovarian organ index (*p < 0.05, **p < 0.01, and ***p < 0.001)
Fibrosis is a hallmark of ageing tissues, and the ovary is among the first organs to show overt signs of ageing. Recent studies have demonstrated that ageing often leads to altered ovarian architecture and function, including increased fibrosis in the ovarian stroma. MSC transplantation has been shown to be an effective method to inhibit ovarian fibrosis and restore ovarian function [19, 20]. Masson staining was performed to assess the degree of fibrosis (Fig. 4c); in the figure, blue represents collagen fibres, and red represents muscle fibres. The percentage of collagen fibres was 10.61 ± 1.83% in the young control group. In the elderly model group, it was 56.79 ± 3.58%; the collagen deposition area was large and the fibres were densely but disorderly arranged, with only a few muscle fibres remaining locally. In the elderly treatment group, it was 23.71 ± 2.4%; the fibres were deposited mostly in the cortex, the deposition area was small, and the arrangement was loose (Fig. 4c, g).
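The per-group values above are percent-area measurements from stained sections. As an illustrative sketch only (the paper does not describe its image-analysis pipeline), such a percentage can be computed from a hypothetical binary collagen mask:

```python
def collagen_percent(mask):
    """Percent of pixels classified as collagen (blue in Masson staining).

    mask: 2D list of booleans, True where a pixel is scored as collagen.
    Illustrative percent-area calculation; the segmentation step that would
    produce the mask is not described in the paper and is assumed here.
    """
    flat = [px for row in mask for px in row]
    return 100.0 * sum(flat) / len(flat)

# Toy 4x5 mask with 3 collagen pixels out of 20 -> 15.0 %
toy_mask = [
    [True, False, False, False, False],
    [False, True, False, False, False],
    [False, False, False, False, False],
    [False, False, False, False, True],
]
print(collagen_percent(toy_mask))  # prints 15.0
```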
Follicular atresia is related to the apoptosis of granulosa cells, which are abundant in ovarian follicles; previous studies showed that MSCs reduced this apoptosis [21, 22]. A TUNEL assay was performed to analyse apoptosis, with red staining indicating apoptotic cells (Fig. 4d). The apoptosis rate was 1.07 ± 0.04% in the young control group, 25.93 ± 2.49% in the elderly model group, and 6.98 ± 1.35% in the elderly treatment group (Fig. 4g).
Previous studies have revealed that MSCs augment the density of FITC-dextran perfused blood vessels [23] and that intravenous injection of preconditioned MSCs improves microvascular dynamics [24]. We performed immunohistochemical staining to observe the density of blood vessels, with CD34-positive granules indicating blood vessels (Fig. 4e). The density of blood vessels was 114 ± 17 in the young control group, 73 ± 6 in the elderly model group, and 118 ± 18 in the elderly treatment group (Fig. 4g).
The changes in ovarian histology after BMMSC treatment show that BMMSCs improve ovarian structure and function. To assess whether BMMSCs home to the ovaries, immunofluorescence staining was performed to track BMMSCs in the ovary. Two immunofluorescent granules were detected in the elderly treatment group, while no immunofluorescence was observed in the elderly model group (Fig. 4f).
A total of 1258 genes were differentially expressed, and ageing-related genes partly returned to a young phenotype following BMMSC treatment, with the function correlated to Prog-mediated oocyte maturation.
After observing the effects of BMMSCs on ovarian ageing with respect to ovarian tissue structure and the secretion of sex hormones, RNA-seq was performed on ovarian tissue to identify key genes and signalling pathways. Cluster plots showed that 1258 genes were differentially expressed after BMMSC treatment (Fig. 5a). 3D-PCA trajectory analysis showed that the ovarian expression characteristics of ageing-related genes of the elderly treatment group reverted to those of the young control group (Fig. 5b). GO analysis showed that the DEGs were primarily enriched in terms related to the cell cycle (Fig. 5c). A total of 415 genes were upregulated with ageing and downregulated after BMMSC treatment (Fig. 5d) (p = 5.0e−18). A total of 843 genes were downregulated with ageing and upregulated after BMMSC treatment and were enriched in the NABA matrisome-associated and cytokine-mediated signalling pathways and metal ion homeostasis (Fig. 5e) (p = 4.5e−154). CytoHubba analysis revealed the top 20 DEGs in the protein-protein interaction (PPI) network (Fig. 5f), and ClueGO analysis showed that these DEGs were enriched primarily in the terms of cell cycle, oocyte meiosis, progesterone-mediated oocyte maturation, histone serine kinase activity, and protein threonine/histone/tyrosine serine kinase pathway (Fig. 5g).
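Enrichment p-values of the kind quoted above are conventionally computed with a one-sided hypergeometric (Fisher) test. A stdlib sketch with invented counts (the background size and per-term overlaps are not given here, so all numbers are placeholders):

```python
from math import comb

def enrichment_p(total, annotated, degs, overlap):
    """Upper-tail hypergeometric P(X >= overlap): the probability of drawing
    at least `overlap` term-annotated genes among `degs` DEGs sampled from a
    background of `total` genes, of which `annotated` carry the term."""
    denom = comb(total, degs)
    return sum(
        comb(annotated, k) * comb(total - annotated, degs - k)
        for k in range(overlap, min(annotated, degs) + 1)
    ) / denom

# Toy numbers: 20-gene background, 5 annotated, 10 DEGs, 4 overlapping.
p = enrichment_p(20, 5, 10, 4)
```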
Discussion
Ovarian ageing weakens reproduction, ovulation, sex hormone secretion, and other female functions and affects tissues and organs throughout the body. It is a gradual, multi-factorial, and complex biological process driven by the combined decline in the number and quality of follicles. MSC transplantation has been shown to be an effective and safe new therapeutic method for ovarian ageing [25] and has been proposed to restore ovarian structure and function [26]. Interestingly, our results provide a comprehensive picture of how BMMSCs act on the ageing ovary.
In our study, PET-CT showed that ovarian volume increased, lesions decreased, and metabolism was vigorous after BMMSC treatment. Sex hormones are secreted by ovaries to carry out specific functions and affect other organs. Our results showed that Prog, Testo, PRL, and E2 were significantly increased after BMMSC treatment, while FSH, LH, and AMH were not significantly different before and after BMMSC treatment. These results are consistent with previous studies reporting the ability of MSCs to restore ovarian structure and sex hormone secretion [22, 27, 28]. Furthermore, previous studies have demonstrated that the number of MSCs in different cell cycle stages can be adjusted by adjusting the concentrations of sex hormones [29, 30], suggesting that after BMMSCs restore the secretion of sex hormones, sex hormones may in turn regulate the biological function of MSCs.
In our study, a comparative analysis of the HE staining results between the elderly treatment and model groups showed that BMMSCs improved ovarian structure and promoted follicle regeneration. These results are consistent with those reported in previous studies of MSC treatment of ovarian structural destruction and functional decline [28, 31, 32]. Interestingly, a previous study demonstrated the presence of oogonial stem cells (OSCs) in the ovary of the adult axolotl salamander and showed that ovarian injury induces OSC activation and functional regeneration of the ovaries [33]. In addition, OSC activation resulted in rapid differentiation into new oocytes, and follicle cell proliferation promoted follicle maturation during ovarian regeneration [34]. These results suggest that transplanted BMMSCs either home to the ovaries or act via paracrine signalling to regulate the ovarian microenvironment and activate OSCs, thereby promoting follicle regeneration and improving ovarian structure.
Ovaries typically become fibrotic with ageing, which leads to structural damage and functional decline [35]. Therefore, alleviating or reversing ovarian fibrosis is a strategy for treating ovarian ageing. In our study, Masson staining showed that the degree of fibrosis was significantly decreased after BMMSC treatment. Previous studies have shown similar effectiveness of MSCs in inhibiting ovarian fibrosis [20, 36], with the mechanism involving mainly MSC-mediated inhibition of inflammatory factors [37]. These results suggest that BMMSCs secrete various immune- and inflammation-regulating factors that inhibit the inflammatory response and thereby reduce the degree of ovarian fibrosis. However, BMMSC treatment did not restore ovarian fibrosis to the level observed in young macaques.
In this study, our TUNEL assay showed that apoptosis was significantly decreased after BMMSC treatment. These results are consistent with those of previous studies in which MSCs inhibited apoptosis to treat ageing-related diseases [38,39,40]. Additionally, a study of MSC-treated follicle loss showed that MSCs suppressed the expression of apoptotic genes and had antiapoptotic effects [41]. These results suggest that BMMSCs reduced the apoptosis of ageing ovarian cells, rebalancing proliferation and apoptosis and increasing the number of reproductive helper cells.
Our RNA-seq analysis of ovarian tissue identified 1258 differentially expressed genes: 415 were upregulated with age and downregulated after BMMSC treatment, and 843 were downregulated with age and upregulated after BMMSC treatment; thus, the ovarian expression of ageing-related genes partly returned to a young phenotype following BMMSC treatment. Moreover, the top 20 DEGs in the PPI network were enriched primarily in the terms of cell cycle, oocyte meiosis, and progesterone-mediated oocyte maturation. These results suggest that the ovarian transcriptional profile of the elderly treatment group shifted in a younger direction and that BMMSCs derived from juvenile macaques can partly reverse ovarian ageing at the molecular level and significantly reduce the content of ageing-related molecules. Notably, the top 20 DEGs in the PPI network are closely related to the maintenance of ovarian structure and function. In particular, the enrichment of the oocyte meiosis and Prog-mediated oocyte maturation pathways is consistent with our in vivo findings that ovarian structure was improved, new follicles appeared, and Prog levels increased steadily after BMMSC treatment. These findings indicate that the Prog-mediated oocyte maturation pathway plays a key role in the reversal of ovarian ageing by BMMSCs and that the associated genes CCNB1, CCNB2, BUB1, CDC20, and CDK1 may become new therapeutic targets in BMMSC treatment of ovarian ageing.
In summary, BMMSCs regulate the secretion of sex hormones, suppress cell apoptosis, inhibit the degree of fibrosis, reverse the process of ovarian ageing at the molecular level, and significantly reduce the content of ageing-related molecules; these effects restore ovarian structure and function, to promote follicle and blood vessel regeneration.
Conclusions
i.
In the elderly macaque model of ovarian ageing, the ovarian organ index was decreased; ovarian atrophy and structural destruction occurred, with only local atretic follicles observed; FSH and LH levels were increased, while Testo, E2, Prog, CG, and AMH levels were decreased.
Acknowledgements
We thank everyone on our team for assisting with the preparation of this manuscript.
Funding
This work was supported by grants from the Yunnan Science and Technology Plan Project Major Science and Technology Project (2018ZF007) and the project entitled Transformation of subtotipotent stem cells based on the tree shrew model of multiple organ dysfunction syndrome SYDW[2020]19.
Author information
Authors and Affiliations
The Basic Medical Laboratory of the 920th Hospital of Joint Logistics Support Force of PLA, The Transfer Medicine Key Laboratory of Cell Therapy Technology of Yunan Province, The Integrated Engineering Laboratory of Cell Biological Medicine of State and Regions, Kunming, 650032, Yunnan Province, China
Contributions
XHP and XQZ designed the study. CT, JH, ZLY, HP, DHY, GKL, YL, YKY, YYW, and GHZ performed the experiments and collected the data. CT wrote the manuscript. YYA and ZXH assisted with the literature searches and revised the manuscript. All authors read and approved the final manuscript.
Corresponding authors
Ethics declarations
Ethics approval and consent to participate
Animal production licence number: SCXK (Dian) K2017-0003. The use of macaques was approved by the experimental animal ethics committee of the host institution under approval number Lengshen 2019-032 (Section)-01, with animal licence number SYXK (Military) 2012-0039.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. | However, the long-term use of hormone replacement therapy may cause breast cancer, thrombosis, and other diseases [3]. Cytokine therapy has not yet developed into a large-scale industry and is expensive, characteristics that are not conducive to its widespread application [4]. TCM treatment can partially improve ovarian function, but TCM drug compositions have not been fully elucidated, and there are many uncertain factors [5]. Although assisted reproductive technologies (ARTs) and the “freeze-all” strategy of cryopreserving all oocytes or good-quality embryos have increased the range of options [6], the overall success rate for older women remains very low. Therefore, it is necessary to seek new and effective treatment methods.
Ageing ovaries manifest mainly with tissue atrophy, functional degeneration, insufficient self-renewal ability of reproductive helper cells, and decreased secretion of sex hormones. Bone marrow mesenchymal stem cells (BMMSCs) have multidirectional differentiation potential, a strong self-renewal capacity, and biological characteristics of exosomes secreted with various cytokines [7], and they may become a new tool to delay or reverse ovarian ageing [8]. Many clinical and basic studies have shown the effectiveness of mesenchymal stem cells (MSCs) in the treatment of ovarian ageing, and MSCs have been demonstrated to be more effective than other cell types in improving ovarian function [9]. Human amniotic fluid MSCs (hAFMSCs) can restore ovarian physiological ageing (OPA) function [10]. Human placental MSCs (hPMSCs) can inhibit oxidative stress and apoptosis, thereby improving ovarian function [11]. | yes |
Gerontology | Can stem cell therapy reverse aging? | yes_statement | "stem" "cell" "therapy" can "reverse" "aging".. "aging" can be "reversed" through "stem" "cell" "therapy". | https://www.frontiersin.org/articles/10.3389/fcell.2020.588050 | Reversed Senescence of Retinal Pigment Epithelial Cell by ... | Retinal pigment epithelium (RPE) cellular senescence is an important etiology of age-related macular degeneration (AMD). Aging interventions based on the application of stem cells to delay cellular senescence have shown good prospects in the treatment of age-related diseases. This study aimed to investigate the potential of the embryonic stem cells (ESCs) to reverse the senescence of RPE cells and to elucidate its regulatory mechanism. The hydrogen peroxide (H2O2)-mediated premature and natural passage-mediated replicative senescent RPE cells were directly cocultured with ESCs. The results showed that the proliferative capacity of premature and replicative senescent RPE cells was increased, while the positive rate of senescence-associated galactosidase (SA-β-GAL) staining and levels of reactive oxygen species (ROS) and mitochondrial membrane potential (MMP) were decreased. The positive regulatory factors of cellular senescence (p53, p21WAF1/CIP1, p16INK4a) were downregulated, while the negative regulatory factors of cellular senescence (Cyclin A2, Cyclin B1, Cyclin D1) were upregulated. Furthermore, replicative senescent RPE cells entered the S and G2/M phases from the G0/G1 phase. TGFβ (TGFB1, SMAD3, ID1, ID3) and PI3K (PIK3CG, PDK1, PLK1) pathway-related genes were upregulated in premature and replicative senescent RPE cells after ESCs application, respectively. 
We further treated ESC-cocultured premature and replicative senescent RPE cells with SB431542 and LY294002 to inhibit the TGFβ and PI3K pathways, respectively, and found that p53, p21WAF1/CIP1, and p16INK4a were upregulated, while Cyclin A2, Cyclin B1, Cyclin D1, and TGFβ and PI3K pathway-related genes were downregulated, accompanied by decreased proliferation and cell cycle transition and increased positive rates of SA-β-GAL staining and levels of ROS and MMP. In conclusion, we demonstrated that ESCs in direct coculture can effectively reverse the senescence of premature and replicative senescent RPE cells, which may be achieved by upregulating the TGFβ and PI3K pathways, respectively, providing a basis for a new therapeutic option for AMD.
Introduction
Age-related macular degeneration (AMD) is a major cause of blindness worldwide (Mitchell et al., 2018), and effective treatments remain limited. In recent years, the number of AMD patients has increased year by year and is estimated to reach 288 million by 2040 (Wong et al., 2014), resulting in a heavy social burden. Intravitreal injection of anti-vascular endothelial growth factor (VEGF) drugs is currently the most effective treatment for neovascular AMD, but it is expensive, and relapse after drug withdrawal is common. There is currently no effective treatment for dry AMD. Although stem cells can be differentiated into functional retinal pigment epithelium (RPE) cells, problems such as low differentiation efficiency, tumorigenicity, and unresolved safety issues (Mandai et al., 2017) limit their clinical application. Hence, how to use stem cells safely and effectively in the treatment of AMD is an urgent problem to be solved.
Retinal pigment epithelium cellular senescence is one of the main factors in the development of AMD (Wang et al., 2019). Hence, prevention and reversal of RPE cellular senescence may be a therapeutic strategy for AMD. Antioxidant drugs, such as fullerenol and humanin, have been applied to reduce oxidative stress and DNA damage in RPE cells (Zhuge et al., 2014; Sreekumar et al., 2016), thereby delaying RPE cellular senescence. However, almost all drugs have off-target and bystander effects. In addition, delaying cellular senescence cannot clear existing senescent cells, so the progression of age-related diseases cannot be stopped by antioxidant drugs. Therefore, finding a method to effectively reverse RPE cellular senescence may provide new insight for AMD treatment.
The embryonic microenvironment can reverse somatic cellular senescence. The cloning of Dolly the sheep is a good example: under the influence of the embryonic microenvironment, a mature mammary gland cell was reprogrammed into a stem cell and ultimately gave rise to a cloned individual. However, this embryonic microenvironment cannot be used clinically. Embryonic stem cells (ESCs) can mimic the role of the embryonic microenvironment in vitro. Studies have shown that ESC-conditioned medium can enhance the survival of bone marrow precursor cells (Guo et al., 2006) and reduce the aging phenotype of senescent skin fibroblasts (Bae et al., 2016). We previously demonstrated that ESC-conditioned medium could promote the proliferation of corneal epithelial and endothelial cells in vitro (Liu et al., 2010; Lu et al., 2010) and showed that ESCs could maintain stemness in corneal epithelial cells in both transwell (indirect) and cell-cell contact (direct) coculture systems, which was achieved by regulating the telomerase pathway (Zhou et al., 2011); telomere shortening is an important indicator of cellular senescence. In addition, we demonstrated that ESCs in direct coculture can reverse the malignant phenotype of tumors and promote the proliferation of normal skin tissues adjacent to tumors (Liu et al., 2019). Therefore, ESCs may have the potential to reverse the senescence of RPE cells.
On this basis, in this study we applied ESCs in direct coculture to hydrogen peroxide (H2O2)-mediated premature senescent RPE cells and natural passage-mediated replicative senescent RPE cells. Cellular senescence was dynamically assessed from changes in the proliferative capacity of RPE cells, senescence-associated galactosidase (SA-β-GAL) staining activity, cell cycle distribution, levels of reactive oxygen species (ROS) and mitochondrial membrane potential (MMP), and expression of cellular senescence markers (p53, p21WAF1/CIP1, p16INK4a, Cyclin A2, Cyclin B1, and Cyclin D1). The mechanism was further clarified by transcriptome sequencing (RNA-seq), RT-PCR, western blotting, and immunofluorescence, with the aim of providing a new stem cell-based therapeutic option for AMD.
Materials and Methods
Cell Culture
Human primary RPE cells were obtained from the eyeballs of donors aged 20–40 who died unexpectedly without eye diseases from the Eye Bank of Guangdong Province (Zhongshan Ophthalmic Center, Sun Yat-sen University) in line with the principles of the Declaration of Helsinki for research involving human tissues. Approval was granted by the Ethics Committee of Zhongshan Ophthalmic Center, Sun Yat-sen University (Ethics approval number: 2020KYPJ031). The cell sampling method was performed as described previously (Rabin et al., 2013). RPE cells were cultured in DMEM/F-12 (Corning, United States) medium containing 1% penicillin-streptomycin (Gibco, Australia) and 10% fetal bovine serum (Gibco) and passaged at a density of 6000/cm2 every 2–3 days. Mouse ESC-E14s were provided by Prof. Andy Peng Xiang from Sun Yat-sen University, China (Chen et al., 2006). Then, we used green fluorescent protein to label this cell line to construct the ESC-GFP cell line (Zhou et al., 2014). The ESCs mentioned below are referred to as ESC-E14s-GFP cells. ESCs were cultured as described previously (Liu et al., 2019) and passaged at a density of 1 × 104/cm2 every 2–3 days. All cells were cultured in an incubator containing 5% CO2 at 37°C.
Establishment of the Cellular Senescence Model and Coculture System
Retinal pigment epithelium cells from passages 4 to 6 were used in the premature senescence model. RPE cells were treated with 0, 100, 200, 300, 400, and 500 μM H2O2 in serum-free medium for 4 h and then cultured in complete medium for another 44 h. Next, these cells were collected for cell proliferation and SA-β-GAL staining detection to determine the optimal H2O2 concentration. After determining the optimal H2O2 concentration, RPE cells were divided into the following groups: (1) PR group: RPE cells cultured in serum-free medium for 4 h and then cultured in complete medium for another 44 h; (2) PRH group: RPE cells cultured in serum-free medium containing 400 μM H2O2 for 4 h and then cultured in complete medium for another 44 h; (3) PRHE group: RPE cells cultured in serum-free medium containing 400 μM H2O2 for 4 h and then directly cocultured with ESCs at a 1:2 ratio in complete medium for another 44 h; and (4) PRHE-SB group: RPE cells cultured in serum-free medium containing 400 μM H2O2 for 4 h and then directly cocultured with ESCs at a 1:2 ratio in complete medium containing 10 μM SB431542 (MedChemExpress, United States) for another 44 h. RPE cells from passages 8 to 10 were used in the replicative senescence model. RPE cells were divided into the following groups: (1) RR group: RPE cells cultured in complete medium for 48 h; (2) RRE group: RPE cells directly cocultured with ESCs at a 1:2 ratio in completed medium for 48 h; and (3) RRE-LY group: RPE cells directly cocultured with ESCs at a 1:2 ratio in complete medium containing 10 μM LY294002 (MedChemExpress) for 48 h.
Cell Counting Kit 8 (CCK-8) Cell Proliferation Assay
When determining the optimal concentration of H2O2, RPE cells were plated in a 96-well plate at 1800 cells/well for 24 h, cultured with 0–500 μM H2O2 for 4 h, and then cultured in complete medium for another 44 h. Cells from each group were collected and plated in a 96-well plate at 300 cells/well. After 24 h, 10 μl CCK-8 (Dojindo, Japan) solution was added to each well and incubated for 3 h with 5% CO2 at 37°C. The optical density was measured by a microplate reader (BioTek, United States) at 450 nm. The CCK-8 assay was performed continuously for 7 days.
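The daily OD450 readings from the 7-day assay translate into a growth curve per group. A minimal sketch of that calculation (replicate counts and OD values below are invented; blank subtraction and plotting are omitted):

```python
def growth_curve(od_by_day):
    """Per-day mean OD450 and fold change relative to day 1.

    od_by_day: one list of replicate OD450 readings per day.
    Illustrative only; real analysis would also subtract a blank well.
    """
    means = [sum(day) / len(day) for day in od_by_day]
    fold = [m / means[0] for m in means]
    return means, fold

# Two toy days of triplicate readings: mean OD doubles from day 1 to day 2.
means, fold = growth_curve([[0.20, 0.22, 0.21], [0.41, 0.40, 0.45]])
```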
SA-β-GAL Staining Activity
Retinal pigment epithelium cells of each group were collected and plated into a 6-well plate at 1 × 106 cells/well overnight. According to the instructions (Cell Signaling Technology, United States), cells in each well were fixed with 1 ml of 1 × fixative solution for 10–15 min and washed twice with PBS. One milliliter of 1 × β-galactosidase staining solution was added to each well. Cells were incubated in a drying oven at 37°C for 12–14 h and washed twice with PBS. Then, 70% glycerin was added to each well. The SA-β-GAL+ cells with blue perinuclear staining were observed by a microscope (Leica, Germany). At least three fields were randomly selected to calculate the positive staining rate.
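The positive staining rate is obtained by pooling counts over the randomly selected fields. A sketch of that bookkeeping (the per-field counts below are toy values, not data from the study):

```python
def sa_b_gal_positive_rate(field_counts):
    """Pooled percentage of SA-β-GAL+ cells across microscope fields.

    field_counts: list of (positive, total) cell counts per field — at least
    three fields, as in the protocol above. Counts here are invented.
    """
    positive = sum(p for p, _ in field_counts)
    total = sum(t for _, t in field_counts)
    return 100.0 * positive / total

rate = sa_b_gal_positive_rate([(12, 100), (8, 80), (10, 120)])  # 30/300 -> 10.0
```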
ROS Assay
According to the manufacturer’s instructions (Abcam, United Kingdom), the 2′,7′-dichlorofluorescein diacetate (DCFDA) solution was diluted 1000 times with PBS. At least 2 × 105 cells of each group were incubated with 500 μl DCFDA-PBS working solution for 30 min with 5% CO2 at 37°C. The mean fluorescence intensity (Ex485 nm/Em535 nm) was measured by a flow cytometer (BD LSRFortessa, United States).
MMP Assay
Cells from each group were plated into a 96-well plate with a black bottom at a density of 1 × 104 cells/well. On the second day, cells were incubated with 100 μl of 200 nM TMRE (Cell Signaling Technology) for 30 min in a dark incubator with 5% CO2 at 37°C and then washed twice with PBS. The mean fluorescence intensity (Ex550 nm/Em580 nm) was measured by a microplate reader (BioTek).
Cell Cycle Analysis
A total of 1 × 106 cells from each group were fixed with ice-cold 70% ethanol and placed at 4°C overnight. The next day, after washing with PBS, the cells were incubated with 0.5 ml FxCycleTM PI/RNase (Invitrogen) for 15–30 min at room temperature. The cell cycle distribution (Ex488 nm) was detected by a flow cytometer (BD LSRFortessa).
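Flow-cytometry software assigns each cell to a phase from its propidium iodide (DNA content) signal. A deliberately crude, threshold-based sketch of that assignment (real analysis fits the DNA histogram; the thresholds and intensities here are invented):

```python
def phase_fractions(pi_signal, g1_peak):
    """Crude gating of PI intensities into G0/G1, S, and G2/M percentages.

    Cells near the G1 peak have 2N DNA; near twice the peak, 4N (G2/M);
    intermediate signals are scored as S phase. Cutoffs are arbitrary.
    """
    g1_cut, g2_cut = 1.25 * g1_peak, 1.75 * g1_peak
    n = len(pi_signal)
    g1 = sum(1 for x in pi_signal if x <= g1_cut)
    g2m = sum(1 for x in pi_signal if x >= g2_cut)
    s = n - g1 - g2m
    return {"G0/G1": 100.0 * g1 / n, "S": 100.0 * s / n, "G2/M": 100.0 * g2m / n}

# Toy sample: six 2N cells, one mid-S cell, three 4N cells.
fractions = phase_fractions([1.0] * 6 + [1.5] + [2.0] * 3, g1_peak=1.0)
```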
RT-PCR
Total RNA was extracted using an RNeasy mini kit (QIAGEN, Germany). The concentration of total RNA was measured using a NanoDrop 1000TM spectrophotometer (Thermo Fisher Scientific, United States). Reverse transcription was performed using the SYBR PrimeScriptTM Master Mix kit (Takara, Japan). PCR was performed using the SYBR Premix Ex Taq Kit (Takara). The mRNA expression was measured by a LightCycler 480 (Roche, Switzerland). GAPDH was used as an internal reference. The primer sequences are shown in Table 1.
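RT-PCR data normalised to an internal reference such as GAPDH are conventionally converted to fold changes with the Livak 2^-ΔΔCt method. The paper does not state its exact formula, so the following is a generic sketch with invented Ct values:

```python
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^-ΔΔCt relative expression of a target gene versus the control
    group, normalised to an internal reference gene (e.g. GAPDH)."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** -ddct

# ΔCt drops from 8 to 6 cycles versus the reference -> 4-fold upregulation.
fc = fold_change(24.0, 18.0, 26.0, 18.0)
```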
TABLE 1
Table 1. Primer sequences for RT-PCR analysis.
Western Blotting
Cells from each group were collected, and 200 μl of 1× sodium dodecyl sulfate (SDS) was added to 1 × 106 cells to lyse the cells on ice for 30 min. The cell lysates were boiled for 10 min on a dry thermostat (Essenscien, United States) and centrifuged at 14000 rpm for 20 min. Finally, the supernatants were extracted from the cell lysis solutions and stored at −80°C for later use. Protein quantification was performed using a BCA Protein Assay Kit (Bio-Rad, Canada). A 30 μg sample from each group was loaded in a 10% SDS-PAGE gel. After electrophoresis for 1.5 h, the proteins on the gel were transferred to a PVDF membrane at 300 mA for 3 h. Then, the membrane was blocked with 5% non-fat dry milk in TBST (Tris-buffered saline with 0.1% Tween-20) for 2 h and incubated with primary antibodies at 4°C overnight. After washing three times with TBST, the membrane was incubated with horseradish peroxidase-linked anti-mouse (1:5000, Sigma) and horseradish peroxidase-linked anti-rabbit (1:5000, Sigma) secondary antibodies for 1.5 h at room temperature. After washing three times with TBST again, the intensity of the protein bands was detected by a ChemiDoc MP imaging system (Bio-Rad, United States) using an ECL substrate (Thermo Fisher Scientific, United States). Information on the primary antibodies is shown in Supplementary Table 1. Quantification results are shown in Supplementary Figure 1.
Immunofluorescence
Cells of each group were collected and seeded in a 6-well plate with coverslips at a density of 5 × 104 cells per well. After attaching to the plate, the cells were fixed with 4% paraformaldehyde, permeabilized with 0.3% Triton X-100, and blocked with goat serum at room temperature for 20 min. Then, the cells were incubated with the primary antibodies at 4°C overnight, after which they were incubated with Alexa Fluor 594 donkey anti-mouse lgG secondary antibody or Alexa Fluor 594 donkey anti-rabbit lgG secondary antibody (1:100, Invitrogen) at room temperature for 1 h. The nuclei were stained with Hoechst 33258 (1:2000, Invitrogen) for 10 min. Finally, an anti-fluorescence quenching agent (Bosterbio, United States) was used to prevent fluorescence quenching. PBS was used for washing three times after every step. Immunofluorescence images were taken by a laser scanning confocal microscope (LSM 800; Carl Zeiss, Germany). Cells incubated with PBS instead of primary antibody were used as negative controls. Information on the primary antibodies is shown in Supplementary Table 1.
Statistical Analysis
The statistical analyses were performed by GraphPad Prism 7.0 software. Differences between two groups were analyzed using the two-tailed unpaired Student’s t-test, and One-way ANOVA or Two-way ANOVA were used for comparing more than two groups. All data are presented as the mean ± standard deviation (SD). P-values < 0.05 were considered statistically significant.
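For the two-group comparisons, a two-tailed unpaired Student's t-test was used; the statistic itself can be sketched with the stdlib (the p-value is then read from the t distribution, which GraphPad Prism handles; the sample data below are invented):

```python
from statistics import mean, stdev

def students_t(a, b):
    """Equal-variance two-sample Student's t statistic and its degrees of
    freedom. Converting t to a two-tailed p-value is left to stats software
    such as GraphPad Prism, as used in the study."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    t = (mean(a) - mean(b)) / (pooled_var * (1 / na + 1 / nb)) ** 0.5
    return t, na + nb - 2

# Identical toy groups give t = 0 with n_a + n_b - 2 = 4 degrees of freedom.
t, df = students_t([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
```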
Results
Identification of RPE Cells
The morphology of RPE cells is shown in Figures 1A,B. RPE cells at passage 0 had a cobblestone morphology and contained a large amount of black pigment (Figure 1A). With increasing passage number, RPE cells gradually became spindle-shaped, and the intracellular pigment decreased (Figure 1B). The results of western blotting (Figure 1C) and immunofluorescence (Figure 1D) showed that the RPE-specific marker RPE65 was expressed in these cells, while markers of retinal vascular cells, including PDGFRβ and CD31, were not expressed in human RPE cells (Figures 1E,F), indicating that pure RPE cells were used in this study.
FIGURE 1
Figure 1. Identification of retinal pigment epithelium (RPE) cells. (A) The morphology of RPE cells from passage 0 by phase contrast microscopy. Scale bar, 100 μm. (B) The morphology of RPE cells from passage 4 by phase contrast microscopy. Scale bar, 100 μm. (C) Western blots of RPE65 in RPE cells. β-Actin served as the internal control. (D) Immunofluorescence assays of RPE65 in RPE cells. Scale bar, 50 μm. (E) Western blots of PDGFRβ in RPE cells. NIH/3T3 cell was used as the positive control. β-Actin served as the internal control. (F) Western blots of CD31 in RPE cells. Human umbilical vein endothelial cell (HUVEC) was used as the positive control. β-Actin served as the internal control.
SA-β-GAL is the most commonly used indicator of cellular senescence (Piechota et al., 2016). The commonly used models of cellular senescence include (1) stress-mediated premature senescence, which is triggered or accelerated by external factors, independent of telomere shortening, and (2) natural passage-mediated replicative senescence, which represents the limitation of cell proliferation in vitro due to telomere shortening (Kida and Goligorsky, 2016; de Magalhaes and Passos, 2018). The retina is vulnerable to oxidative stress because it is rich in mitochondria and contains easily oxidized polyunsaturated fatty acids (PUFAs) (Blasiak et al., 2016). RPE cells are in a chronic oxidative stress state under long-term exposure to light and sustained oxidative stress can lead to DNA damage and a series of cellular senescence reactions (Marazita et al., 2016; Felszeghy et al., 2019; Kaarniranta et al., 2019), indicating that oxidative stress is one of the pathogenic factors of AMD. Hence, oxidative stress was used in this study to establish a model of premature senescence of RPE cells. The application of premature senescent and replicative senescent RPE cells as experimental cells can better reflect the comprehensive role of the ESCs in reversing RPE cellular senescence. In premature senescent RPE cells, the positive rate of SA-β-GAL staining (Figures 2A,B) and proliferation capacity (Figure 2C) were positively and negatively correlated with the H2O2 concentration, respectively. To ensure that the RPE cells have a certain SA-β-GAL positive staining rate to successfully represent cellular senescence and have a certain proliferative capacity for adequate cell collection, 400 μM H2O2 was selected as the final experimental concentration. As shown in Figures 2D,E, the positive rate of SA-β-GAL staining gradually increased with increasing cell passages, and there was a significant difference from passage 8. 
Finally, we selected RPE cells from passages 8 to 10 to represent replicative senescent cells.
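The dose-selection logic described above (enough SA-β-GAL positivity to represent senescence, enough residual proliferation for cell collection) can be expressed as a simple filter. The dose-response values below are invented placeholders for illustration only, not the data of Figure 2; by construction the filter lands on 400 μM, mirroring the choice made in the text.

```python
# Illustrative sketch of the dose-selection logic for the H2O2
# premature-senescence model: induce enough SA-beta-GAL positivity to
# represent senescence while keeping enough proliferative capacity for
# cell collection. All dose-response values are invented placeholders,
# NOT the data underlying Figure 2.
doses = [0, 100, 200, 400, 600, 800]       # uM H2O2
sa_b_gal_pos = [5, 15, 35, 65, 80, 90]     # % SA-beta-GAL+ (rises with dose)
proliferation = [100, 90, 75, 55, 30, 10]  # % of control (falls with dose)

# Keep doses that are "senescent enough" but still proliferate adequately.
candidates = [d for d, s, p in zip(doses, sa_b_gal_pos, proliferation)
              if s >= 50 and p >= 50]
chosen = min(candidates)  # lowest dose meeting both criteria
print(chosen)
```

With these placeholder curves only one dose passes both thresholds; with real data one would pick the lowest passing dose to minimize off-target stress.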
The CCK-8 assay results showed the proliferative capacity and growth curve of each group of cells. As shown in Figure 3A, the proliferative capacity of the PRH group was significantly lower than that of the PR group. However, the proliferative capacity of the PRHE (Figure 3A) and RRE (Figure 3B) groups was significantly higher than that of the PRH and RR groups, respectively, indicating that ESCs can improve the proliferative capacity of premature and replicative senescent RPE cells via direct coculture.
FIGURE 3
Figure 3. The cocultured embryonic stem cells (ESCs) increased the proliferative capacity of premature and replicative senescent RPE cells. (A) Proliferation of the PR, PRH, and PRHE groups, as assessed by a CCK-8 proliferation assay (n = 3 biological repeats). (B) Proliferation of the RR and RRE groups, as assessed by a CCK-8 proliferation assay (n = 3 biological repeats). Data are presented as the mean ± SD. ∗∗P < 0.01; ****P < 0.0001. PR: RPE cells of the control group from passages 4 to 6; PRH: RPE cells from passages 4 to 6 treated with 400 μM H2O2; PRHE: RPE cells from passages 4 to 6 treated with 400 μM H2O2 and then cocultured with ESCs. RR: RPE cells of the control group from passages 8 to 10; RRE: RPE cells from passages 8 to 10 cocultured with ESCs.
Cell cycle arrest is one of the hallmarks of cellular senescence and results in limited proliferative capacity (Nacarelli and Sell, 2017). As shown in Figure 4A, compared with the PR group, the proportion of premature senescent RPE cells in G0/G1 phase decreased from 66.24 ± 13.46% to 44.99 ± 11.91% (p = 0.006), while the proportion in G2/M phase increased from 17.96 ± 2.089% to 46.09 ± 5.093% (p = 0.0006), suggesting that H2O2-mediated premature senescence in RPE cells manifests mainly as G2/M arrest, consistent with other studies (Santa-Gonzalez et al., 2016; Zhang et al., 2016). However, the cell cycle distribution of the PRHE group was not significantly different from that of the PRH group. In the replicative senescence model (Figure 4B), the proportion of ESCs-cocultured replicative senescent RPE cells in G0/G1 phase decreased from 60.19 ± 1.533% to 39.78 ± 1.545% (p < 0.0001) compared to that in the RR group. In particular, the proportions of RPE cells entering S phase (13.54 ± 0.8122%, p = 0.0002) and G2/M phase (23.15 ± 0.714%, p = 0.0013) were higher in the RRE group than in the RR group (S phase: 6.493 ± 2.349%; G2/M phase: 17.48 ± 1.103%), indicating that ESCs can enhance the proliferative capacity of replicative senescent RPE cells mainly by promoting cell cycle progression via direct coculture.
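Group comparisons like those above reduce to two-sample tests on triplicate flow-cytometry percentages. The sketch below uses synthetic replicate values chosen only to approximate the reported means ± SD for the G0/G1 fraction (RR vs. RRE); these are not the authors' raw data, and the original study may have used a different test.

```python
# Two-sample comparison of flow-cytometry phase percentages across
# triplicates. The replicate values are synthetic, chosen only to match
# the reported means +/- SD for the G0/G1 fraction (RR: 60.19 +/- 1.533%,
# RRE: 39.78 +/- 1.545%); they are NOT the authors' raw data.
from scipy import stats

rr_g0g1 = [58.7, 60.2, 61.7]   # control replicative senescent cells
rre_g0g1 = [38.3, 39.8, 41.3]  # ESC-cocultured cells

t, p = stats.ttest_ind(rr_g0g1, rre_g0g1, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.2e}")
```

Welch's variant (`equal_var=False`) avoids assuming equal group variances, a safer default for small n.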
SA-β-GAL staining was observed under a light microscope. In the premature senescence model, the positive rate of SA-β-GAL staining in the PRH group increased to 66.98 ± 5.437% compared with that in the PR group (14.48 ± 1.198%, p < 0.0001) (Figures 5A,C). After coculture with ESCs, the positive rate of SA-β-GAL staining in the PRHE group decreased to 36.65 ± 1.866% (p < 0.0001) compared to that in the PRH group (Figures 5A,C). In the replicative senescence model, the positive rate of SA-β-GAL staining decreased from 21.33 ± 1.427% (RR group) to 8.014 ± 0.8235% (RRE group) (Figures 5B,D).
Reactive oxygen species and MMP are commonly used indicators of cellular senescence (Lee et al., 2006; Velarde et al., 2012; Banerjee and Mandal, 2015). Mean fluorescence intensity was used to indicate intracellular ROS and MMP levels. In the premature senescence model, the levels of ROS (6637 ± 177.5, p < 0.0001) (Figures 5E,G) and MMP (635 ± 14.36, p < 0.0001) (Figure 5I) in the PRH group were higher than those in the PR group. After coculture with ESCs, the levels of ROS (5329 ± 86.63, p < 0.0001) (Figures 5E,G) and MMP (517.8 ± 20.03, p = 0.003) (Figure 5I) in the PRHE group were decreased compared with those in the PRH group. In the replicative senescence model, the levels of ROS (3108 ± 673.8, p = 0.0229) (Figures 5F,H) and MMP (208.9 ± 9.341, p = 0.0001) (Figure 5J) in the RRE group were also decreased compared to those in the RR group.
To further verify the role of ESCs in reversing the senescence of RPE cells, we detected classical senescence-related positive (p53, p21WAF1/CIP1, and p16INK4a) and negative (Cyclin A2, Cyclin B1, and Cyclin D1) markers. Cyclin A2, Cyclin B1, and Cyclin D1 are cyclins that activate cyclin-dependent kinases and thereby positively regulate the cell cycle (Bendris et al., 2015). As shown in Figure 6, p21WAF1/CIP1 was increased and Cyclin A2 and Cyclin B1 were decreased in the PRH group compared to the PR group. However, after coculture with ESCs, p21WAF1/CIP1 was decreased and Cyclin A2 and Cyclin B1 were increased in the PRHE group. In the replicative senescence model (Figure 7), p53, p21WAF1/CIP1, and p16INK4a were downregulated, while Cyclin A2, Cyclin B1, and Cyclin D1 were upregulated in the RRE group, further suggesting that ESCs can reverse the premature and replicative senescence of RPE cells via direct coculture by downregulating senescence-related positive markers and upregulating senescence-related negative markers.
The Cocultured ESCs Reversed the Premature and Replicative Senescence of RPE Cells by Regulating the TGFβ and PI3K Pathways, Respectively
To further explore the specific mechanism by which the cocultured ESCs reversed RPE cellular senescence, RNA-seq was performed. The heat maps of all differentially expressed genes (DEGs) in the premature and replicative senescence models are shown in Figures 8A,B. To clarify the mechanism at the level of the whole regulatory network, we focused on the commonly used KEGG pathway analysis, which incorporates current knowledge of molecular interaction networks. The q value, calculated by the hypergeometric test, was used to indicate the degree of enrichment of each KEGG pathway; a smaller q value indicates a higher degree of enrichment. In the premature senescence model, we first excluded KEGG pathways irrelevant to cellular senescence and RPE cells, or pathways whose enriched genes did not reflect the overall regulation of cellular senescence. Next, representative genes in the remaining KEGG pathways that might be involved in regulating cellular senescence were verified in turn by RT-qPCR or western blotting, in order of increasing q value, and we further excluded pathways whose representative genes were not significantly differentially expressed by RT-qPCR. On this basis, the TGFβ pathway was selected (Figure 8C). After verification by RT-qPCR, western blotting, and immunofluorescence, the TGFβ pathway-related genes, including transforming growth factor beta 1 (TGFB1), SMAD family member 3 (SMAD3), inhibitor of DNA binding 1 (ID1), and inhibitor of DNA binding 3 (ID3), were found to be decreased in the PRH group but increased in the PRHE group (Figure 9). The KEGG pathway enrichment analysis of the RNA-seq data in the replicative senescence model is shown in Figure 8D. The PI3K pathway was selected in the replicative senescence model for reasons similar to those underlying the selection of the TGFβ pathway in the premature senescence model.
After verification by RT-qPCR, western blotting, and immunofluorescence, the PI3K pathway-related genes, including phosphatidylinositol-4,5-bisphosphate 3-kinase catalytic subunit gamma (PIK3CG), pyruvate dehydrogenase kinase 1 (PDK1), and polo like kinase 1 (PLK1), were increased in the RRE group compared to the RR group (Figure 10), indicating that the cocultured ESCs reversed the premature and replicative senescence of RPE cells by activating the TGFβ and PI3K pathways, respectively.
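The pathway screen above rests on a per-pathway hypergeometric test with q values for multiple testing. A minimal sketch of that calculation (invented gene counts; the pathway names are only labels for illustration, not the study's results) using scipy and a hand-rolled Benjamini-Hochberg step:

```python
# Sketch of KEGG-style over-representation analysis: for each pathway,
# a hypergeometric test asks whether the overlap between the DEG list and
# the pathway gene set exceeds chance, and Benjamini-Hochberg converts the
# p values into q values (smaller q = stronger enrichment). All gene
# counts and pathway names here are invented for illustration.
from scipy.stats import hypergeom

N_GENES = 20000  # background gene universe
N_DEG = 500      # differentially expressed genes

# pathway -> (pathway size K, observed overlap k with the DEG list)
pathways = {
    "TGF-beta signaling": (100, 15),
    "PI3K-Akt signaling": (350, 20),
    "Ribosome":           (150, 5),
}

# Upper-tail probability P(overlap >= k) for each pathway.
pvals = {name: hypergeom.sf(k - 1, N_GENES, K, N_DEG)
         for name, (K, k) in pathways.items()}

# Benjamini-Hochberg step-up: q_(i) = min over j >= i of p_(j) * m / j
m = len(pvals)
ranked = sorted(pvals.items(), key=lambda kv: kv[1])
qvals, running_min = {}, 1.0
for i, (name, p) in reversed(list(enumerate(ranked, start=1))):
    running_min = min(running_min, p * m / i)
    qvals[name] = running_min

for name, _ in ranked:
    print(f"{name}: p = {pvals[name]:.2e}, q = {qvals[name]:.2e}")
```

Note the `sf(k - 1, ...)` call: scipy's survival function gives P(X > k - 1), i.e. the inclusive upper tail P(X ≥ k) that enrichment tests need.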
FIGURE 8
Figure 8. Results of RNA-seq in ESCs-cocultured premature and replicative senescent RPE cells. (A) The heat map of all differentially expressed genes for the PRH and PRHE groups (n = 3 biological repeats). (B) The heat map of all differentially expressed genes for the RR and RRE groups (n = 3 biological repeats). The horizontal axis represents three biological repeats of the samples, and the vertical axis represents the genes. Red indicates upregulated expression and blue indicates downregulated expression. (C) The bar diagram of Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis of the PRH and PRHE groups (n = 3 biological repeats). (D) The bar diagram of KEGG pathway analysis of the RR and RRE groups (n = 3 biological repeats). The abscissa represents the degree of enrichment: a greater –log10 q value indicates a higher degree of enrichment of the KEGG pathway. The ordinate represents the name of the KEGG pathway.
Figure 10. The cocultured ESCs upregulated PI3K pathway-related markers of replicative senescent RPE cells. (A) Expression of PI3K pathway-related markers in the RR and RRE groups, as assessed by RT-qPCR (n ≥ 3 biological repeats). (B) Results from PI3K pathway-related markers in the RR and RRE groups as determined by western blotting. β-Actin was used as the internal reference. (C) The expression levels of PI3K pathway-related markers in the RR and RRE groups as determined by immunofluorescent staining. Scale bar, 50 μm. Data are presented as the mean ± SD. ∗P < 0.05; ****P < 0.0001.
Furthermore, SB-431542 (Scharpfenecker et al., 2009), a specific inhibitor of the TGFβ pathway, and LY294002 (Qin et al., 2013), a specific inhibitor of the PI3K pathway, were applied to ESCs-cocultured premature and replicative senescent RPE cells, respectively. The results showed that the expression levels of TGFB1, SMAD3, ID1, ID3, Cyclin A2, and Cyclin B1 in the PRHE-SB group were reduced compared with those in the PRHE group (Figure 11). Similarly, the expression of PIK3CG, PDK1, PLK1, Cyclin A2, Cyclin B1, and Cyclin D1 in the RRE-LY group was decreased, while p53, p21WAF1/CIP1, and p16INK4a levels were increased compared with those in the RRE group (Figure 12). Moreover, the proliferative capacity of the PRHE-SB group was decreased (Figure 13A), while the positive rate of SA-β-GAL staining (83.37 ± 3.065%, p = 0.0003) and the levels of ROS (4016 ± 240, p = 0.0003) and MMP (1017 ± 68.6, p = 0.0051) were increased compared with those in the PRHE group (SA-β-GAL+: 57.94 ± 2.197%; ROS: 1888 ± 185.3; MMP: 742 ± 51.21) (Figures 14A,C,E,G,I). The proportion of the PRHE-SB group in G0/G1 phase was increased from 29.87 ± 1.929% to 51.25 ± 7.681% (p = 0.0004), and the proportion in G2/M phase was decreased from 52.21 ± 6.388% to 36.45 ± 1.415% (p = 0.0048) compared to the PRHE group (Figure 13C). Similarly, compared to the RRE group (SA-β-GAL+: 17.17 ± 2.965%; ROS: 1039 ± 178.9; MMP: 199.3 ± 26.09), the proliferative capacity of the RRE-LY group was decreased (Figure 13B), while the positive rate of SA-β-GAL staining (42.63 ± 5.149%, p = 0.0018) and the levels of ROS (1953 ± 388, p = 0.0208) and MMP (280.9 ± 4.221, p = 0.0059) were elevated (Figures 14B,D,F,H,J). The proportion of the RRE-LY group in G0/G1 phase was increased from 34.33 ± 1.773% to 58.88 ± 6.783% (p < 0.0001), and the proportions of RPE cells entering S phase (12.01 ± 2.434%, p = 0.0001) and G2/M phase (15.14 ± 1.554%, p = 0.0337) were lower than those in the RRE group (S phase: 26.17 ± 5.011%; G2/M phase: 22.61 ± 1.286%) (Figure 13D).
Taken together, these results indicate that application of the corresponding inhibitors suppressed the TGFβ and PI3K pathways in ESCs-cocultured premature and replicative senescent RPE cells, respectively, resulting in upregulation of senescence-related positive markers, the SA-β-GAL staining positive rate, and the levels of ROS and MMP, together with downregulation of senescence-related negative markers, proliferation, and cell cycle transition. This further suggests that the cocultured ESCs reversed the premature and replicative senescence of RPE cells, possibly by regulating the TGFβ and PI3K pathways, respectively.
FIGURE 11
Figure 11. Inhibition of the TGFβ pathway upregulated senescence-related positive markers and downregulated senescence-related negative markers in ESCs-cocultured premature senescent RPE cells. (A) Expression of the TGFβ pathway and cellular senescence-related markers in the PRHE and PRHE-SB groups, as assessed by RT-qPCR (n ≥ 3 biological repeats). (B) Results of the TGFβ pathway and cellular senescence-related markers in the PRHE and PRHE-SB groups as determined by western blotting. β-Actin was used as the internal reference. (C) The expression levels of the TGFβ pathway and cellular senescence-related markers in the PRHE and PRHE-SB groups as determined by immunofluorescent staining. Scale bar, 50 μm. Data are presented as the mean ± SD. ∗P < 0.05; ∗∗P < 0.01; ∗∗∗P < 0.001; ****P < 0.0001. PRHE-SB: RPE cells from passages 4 to 6 treated with 400 μM H2O2 and then cocultured with ESCs with 10 μM SB431542.
FIGURE 12
Figure 12. Inhibition of the PI3K pathway upregulated senescence-related positive markers and downregulated senescence-related negative markers in ESCs-cocultured replicative senescent RPE cells. (A) Expression of the PI3K pathway and cellular senescence-related markers in the RRE and RRE-LY groups, as assessed by RT-qPCR (n ≥ 3 biological repeats). (B) Results for the PI3K pathway and cellular senescence-related markers in the RRE and RRE-LY groups as determined by western blotting. β-Actin was used as the internal reference. (C) The expression levels of the PI3K pathway and cellular senescence-related markers in the RRE and RRE-LY groups as determined by immunofluorescent staining. Scale bar, 50 μm. Data are presented as the mean ± SD. ∗P < 0.05; ∗∗P < 0.01; ∗∗∗P < 0.001; ****P < 0.0001. RRE-LY: RPE cells from passages 8 to 10 cocultured with ESCs and treated with 10 μM LY294002.
Discussion
Currently, stem cell-induced differentiation and transplantation are the main anti-aging approaches for treating age-related diseases (da Costa et al., 2016). Although stem cells can be differentiated into functional RPE cells, problems such as low differentiation efficiency, tumorigenicity, and unresolved safety issues (Mandai et al., 2017) limit their clinical application. RPE cellular senescence is one of the important causes of AMD (Wang et al., 2019). Although senescence in RPE cells can be delayed by antioxidant drugs (Zhuge et al., 2014; Sreekumar et al., 2016), almost all drugs have off-target and bystander effects (Wang et al., 2019), and delaying cellular senescence does not clear cells that are already senescent. Therefore, reversal of RPE cellular senescence may be an effective treatment for AMD.
Recently, exploiting a young environment for anti-aging treatment has become a research hotspot. For example, plasma from young mice rejuvenated older mice (Rebo et al., 2016). Our previous studies found that ESC-conditioned medium promoted the proliferation of corneal epithelial and endothelial cells and increased the expression of stem cell markers in those cells (Liu et al., 2010; Lu et al., 2010). Furthermore, direct coculture had a stronger effect than both transwell-based indirect coculture and ESC-conditioned medium (Zhou et al., 2011). We also demonstrated that ESCs can reverse the malignancy of leukemia and choroidal melanoma via direct coculture and promote the proliferation of normal skin tissues adjacent to tumors, whereas the microenvironment of mesenchymal stem cells did not have this effect (Zhou et al., 2014; Liu et al., 2019). On the basis of these previous studies, we directly cocultured ESCs with RPE cells.
SA-β-GAL staining, ROS levels, and MMP levels are among the most commonly used markers of cellular senescence (Aravinthan, 2015; Jing et al., 2018). Senescent cells stain blue with SA-β-GAL, which is a gold standard for detecting cellular senescence (de Mera-Rodriguez et al., 2019). Oxidative stress often increases intracellular ROS, and long-term accumulation of ROS is considered a driver of senescence (Velarde et al., 2012). When oxidative stress causes mitochondrial hyperpolarization rather than mitochondrial dysfunction, MMP becomes elevated (Lee et al., 2006). Elevated MMP in turn increases ROS (Zorov et al., 2006), thus aggravating oxidative damage and cellular senescence. In this study, the positive rate of SA-β-GAL staining and the levels of ROS and MMP were all decreased after coculture with ESCs (Figure 5), demonstrating the potential of cocultured ESCs to reverse the senescence of RPE cells.
ID family genes, downstream of the TGFβ pathway, participate in cell cycle regulation. ID proteins can downregulate p21WAF1/CIP1 and activate cyclin-dependent kinases (CDKs) by antagonizing class A and class B heterodimers, thereby promoting cell cycle transition. Overexpression of ID1 and ID3 can delay the senescence of human keratinocytes (Zebedee and Hara, 2001). TGFB1 can mediate ID expression through SMAD3 (Liang et al., 2009; Notohamiprodjo et al., 2012). In this study, we found that TGFB1, SMAD3, ID1, and ID3 were upregulated in ESCs-cocultured premature senescent RPE cells, accompanied by downregulation of p21WAF1/CIP1 and upregulation of Cyclin A2 and Cyclin B1 (Figures 8A, 9). After SB431542 was applied, TGFB1, SMAD3, ID1, and ID3 were decreased, as were Cyclin A2 and Cyclin B1 (Figure 11), with decreased proliferation and cell cycle transition (Figures 13A,C) and an increased positive rate of SA-β-GAL staining and levels of ROS and MMP (Figures 14A,C,E), suggesting that the cocultured ESCs reversed the premature senescence of RPE cells possibly by activating the TGFβ pathway, which in turn upregulated Cyclin A2 and Cyclin B1 and downregulated p21WAF1/CIP1.
Studies have shown that activation of the PI3K pathway can delay cellular senescence (Chen et al., 2017; Chai et al., 2018; Wang et al., 2018). With increasing age, the activity of the PI3K pathway decreases, resulting in reduced tolerance to stress-induced mitochondrial and cellular damage, whereas activation of the PI3K pathway contributes to the recovery of function in senescent RPE cells (He et al., 2014), indicating that the PI3K pathway is involved in the regulation of RPE cellular senescence. PI3K can directly activate PDK1 to promote cell proliferation independently of AKT (Xia et al., 2018). PDK1 can directly mediate PLK1 phosphorylation (Tan et al., 2013). PLK1 is involved in many mitotic processes, and upregulation of PLK1 can reverse part of the aging phenotype (Kim et al., 2013). In addition, PLK1 can directly bind to p53 and antagonize its function (Shao et al., 2018), and the expression of p21WAF1/CIP1 is increased after PLK1 is knocked out (Zhang et al., 2015). The results of this study showed that the expression of PIK3CG, PDK1, and PLK1 was increased in ESCs-cocultured replicative senescent RPE cells (Figure 10). After LY294002 was applied, PIK3CG, PDK1, PLK1, Cyclin A2, Cyclin B1, and Cyclin D1 were decreased while p53, p21WAF1/CIP1, and p16INK4a were increased (Figure 12), with decreased proliferation and cell cycle transition (Figures 13B,D) and an increased positive rate of SA-β-GAL staining and levels of ROS and MMP (Figures 14B,D,F), suggesting that the cocultured ESCs may reverse the replicative senescence of RPE cells by activating the PI3K pathway, thereby downregulating p53, p21WAF1/CIP1, and p16INK4a and upregulating Cyclin A2, Cyclin B1, and Cyclin D1.
In summary, this study demonstrates for the first time that ESCs can reverse the premature and replicative senescence of RPE cells via direct coculture, possibly by upregulating the TGFβ and PI3K pathways, respectively, providing a new option for stem cell-based therapy of AMD and for future anti-aging treatments based on a young environment.
Data Availability Statement
The RNA-seq data used in this study is publicly available in the Sequence Read Archive (SRA) database with the accession number PRJNA671771. Other raw data supporting the conclusions of this article will be made available by the authors on reasonable request, without undue reservation.
Ethics Statement
The studies involving human participants were reviewed and approved by the Ethics Committee of Zhongshan Ophthalmic Center, Sun Yat-sen University. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
Author Contributions
SW conceived the concept, conducted the experiments, and wrote the manuscript. YRL conducted the experiments, interpreted the results, and edited the manuscript. YL, CYL, and LY conducted the experiments. QW prepared the figures. YS, YC, and CL supervised the study. XW and ZW conceived the concept and edited the manuscript. All authors approved the manuscript.
Funding
This work was supported by the National Key R&D Program of China (2018YFC1106000).
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. | However, almost all drugs have off-target and bystander effects. In addition, delaying cellular senescence cannot clear existing senescent cells, so the progression of age-related diseases cannot be stopped by antioxidant drugs. Therefore, finding a method to effectively reverse RPE cellular senescence may provide new insight for AMD treatment.
The embryonic microenvironment can reverse somatic cellular senescence; the cloning of Dolly the sheep is a good example. A mature mammary gland cell can be reprogrammed into a stem cell under the influence of the embryonic microenvironment and can ultimately give rise to a cloned individual. However, this embryonic microenvironment cannot be used clinically. Embryonic stem cells (ESCs) can mimic the role of the embryonic microenvironment in vitro. Studies have shown that ESC-conditioned medium can enhance the survival of bone marrow precursor cells (Guo et al., 2006) and reduce the aging phenotype of senescent skin fibroblasts (Bae et al., 2016). We previously demonstrated that ESC-conditioned medium could promote the proliferation of corneal epithelial and endothelial cells in vitro (Liu et al., 2010; Lu et al., 2010), and showed that ESCs could maintain stemness in corneal epithelial cells both by transwell-based indirect coculture and by direct cell-to-cell contact coculture, an effect achieved by regulating the telomerase pathway (Zhou et al., 2011); telomere shortening is an important indicator of cellular senescence. In addition, we demonstrated that ESCs can reverse the malignant phenotype of tumors via direct coculture and promote the proliferation of normal skin tissues adjacent to tumors (Liu et al., 2019). Therefore, ESCs may have the potential to reverse the senescence of RPE cells.
| yes |
Gerontology | Can stem cell therapy reverse aging? | yes_statement | "stem" "cell" "therapy" can "reverse" "aging".. "aging" can be "reversed" through "stem" "cell" "therapy". | https://www.reuters.com/article/us-stemcells-aging/stem-cell-experiment-reverses-aging-in-rare-disease-idUSTRE61G4SC20100217 | Stem cell experiment reverses aging in rare disease | Reuters | Stem cell experiment reverses aging in rare disease
WASHINGTON (Reuters) - In a surprise result that can help in the understanding of both aging and cancer, researchers working with an engineered type of stem cell said they reversed the aging process in a rare genetic disease.
Researcher Xavier Nissan seen in his laboratory at the Institute for Stem cell Therapy and Exploration of Monogenic Diseases (I-Stem) in Evry, near Paris November 27, 2009. REUTERS/Gareth Watkins
The team at Children’s Hospital Boston and the Harvard Stem Cell Institute were working with a new type of cell called induced pluripotent stem cells or iPS cells, which closely resemble embryonic stem cells but are made from ordinary skin cells.
In this case, they wanted to study a rare, inherited premature aging disorder called dyskeratosis congenita. The bone marrow disorder resembles the better-known aging disease progeria and causes premature graying, warped fingernails and other symptoms as well as a high risk of cancer.
It is very rare and normally diagnosed between the ages of 10 and 30. About half of patients have bone marrow failure, which means their bone marrow stops making blood and immune cells properly.
One of the benefits of stem cells and iPS cells is that researchers can make them from a person with a disease and study that disease in the lab. Harvard’s Dr. George Daley and colleagues were making iPS cells from dyskeratosis congenita patients to do this.
But, reporting in Thursday’s issue of the journal Nature, they said the process of making the iPS cells appeared to reverse one of the key symptoms of the disease in the cells.
In this disease, the cells lose telomerase, an enzyme that helps maintain the telomeres. These are the little caps on the ends of the chromosomes that carry the DNA.
When telomeres unwind, a cell ages. This leads to disease and death.
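The telomere arithmetic behind this claim is easy to make concrete. In the toy model below, the starting length, per-division loss, and senescence threshold are rough textbook-style numbers, not figures from the article; with telomerase active, as in the cancer cells discussed next, the division count is effectively unbounded.

```python
# Toy model of replicative senescence via telomere attrition.
# Parameters are illustrative order-of-magnitude values, not measurements.
START_BP = 10_000      # initial telomere length
LOSS_PER_DIV = 70      # bp lost per cell division (end-replication problem)
SENESCENCE_BP = 5_000  # below this, the cell stops dividing ("Hayflick limit")

def divisions_until_senescence(start=START_BP, loss=LOSS_PER_DIV,
                               limit=SENESCENCE_BP, telomerase=False):
    """Count divisions before telomeres drop below the senescence threshold.
    With telomerase active (as in tumour or stem cells), the ends are
    restored each division and the count is effectively unbounded
    (capped at 1000 here)."""
    length, divisions = start, 0
    while length >= limit and divisions < 1000:
        length -= loss
        if telomerase:
            length += loss  # telomerase re-extends the ends
        divisions += 1
    return divisions

print(divisions_until_senescence())                 # finite Hayflick-style limit
print(divisions_until_senescence(telomerase=True))  # hits the 1000 cap: "immortal"
```

This is the sense in which restoring telomerase (via TERC, below) removes the built-in division counter.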
BECOMING IMMORTAL
But in cancer, telomerase appears to help tumor cells become immortal and replicate out of control. Some experimental cancer drugs target telomerase.
A gene called TERC helps restore the telomeres and Daley’s team said it may be that tumor cells make use of TERC to become immortal.
In making the iPS cells and getting them to grow in the lab, Daley’s team discovered they had three times as much TERC as the diseased cells they were made from.
Simply turning the skin cells into iPS cells helped restore their damaged telomeres, Daley’s team reported. This in theory stops a major component of the aging process as well.
“We’re not saying we’ve found the fountain of youth, but the process of creating iPS cells recapitulates some of the biology that our species uses to rejuvenate itself in each generation,” Daley’s colleague Suneet Agarwal said in a statement.
Treatments that restore TERC may help dyskeratosis congenita patients, they said.
“This paper illustrates how reprogramming a patient’s skin cells into stem cells can teach us surprising lessons about human disease,” Daley added in a statement.
Agarwal says the team is now seeking funding to study this more.
Patients with dyskeratosis congenita often die when they get bone marrow transplants, Agarwal said.
“For these patients, and for patients with other bone marrow failure syndromes, it would be ideal to give them a gentler stem cell transplant from their own cells,” he said. | Stem cell experiment reverses aging in rare disease
| yes |
Gerontology | Can stem cell therapy reverse aging? | yes_statement | "stem" "cell" "therapy" can "reverse" "aging".. "aging" can be "reversed" through "stem" "cell" "therapy". | https://elifesciences.org/articles/71624 | Multi-omic rejuvenation of human cells by maturation phase ... | Abstract
Ageing is the gradual decline in organismal fitness that occurs over time leading to tissue dysfunction and disease. At the cellular level, ageing is associated with reduced function, altered gene expression and a perturbed epigenome. Recent work has demonstrated that the epigenome is already rejuvenated by the maturation phase of somatic cell reprogramming, which suggests full reprogramming is not required to reverse ageing of somatic cells. Here we have developed the first “maturation phase transient reprogramming” (MPTR) method, where reprogramming factors are selectively expressed until this rejuvenation point then withdrawn. Applying MPTR to dermal fibroblasts from middle-aged donors, we found that cells temporarily lose and then reacquire their fibroblast identity, possibly as a result of epigenetic memory at enhancers and/or persistent expression of some fibroblast genes. Excitingly, our method substantially rejuvenated multiple cellular attributes including the transcriptome, which was rejuvenated by around 30 years as measured by a novel transcriptome clock. The epigenome was rejuvenated to a similar extent, including H3K9me3 levels and the DNA methylation ageing clock. The magnitude of rejuvenation instigated by MPTR appears substantially greater than that achieved in previous transient reprogramming protocols. In addition, MPTR fibroblasts produced youthful levels of collagen proteins, and showed partial functional rejuvenation of their migration speed. Finally, our work suggests that optimal time windows exist for rejuvenating the transcriptome and the epigenome. Overall, we demonstrate that it is possible to separate rejuvenation from complete pluripotency reprogramming, which should facilitate the discovery of novel anti-ageing genes and therapies.
Editor's evaluation
This study describes a novel "maturation phase transient reprogramming" (MPTR) method to restore the epigenome of cells to a more youthful state. The authors demonstrate the effectiveness of the method to reverse several age-related changes including remodeling of the transcriptome. The method performs favorably compared to other transient reprogramming protocols, and the study will be of interest to developmental biologists as well as researchers who study ageing.
Introduction
Aging is the gradual decline in cell and tissue function over time that occurs in almost all organisms, and is associated with a variety of molecular hallmarks such as telomere attrition, genetic instability, epigenetic and transcriptional alterations, and an accumulation of misfolded proteins (López-Otín et al., 2013). This leads to perturbed nutrient sensing, mitochondrial dysfunction, and increased incidence of cellular senescence, which impacts overall cell function and intercellular communication, promotes exhaustion of stem cell pools, and causes tissue dysfunction (López-Otín et al., 2013). The progression of some aging related changes, such as transcriptomic and epigenetic ones, can be measured highly accurately and as such they can be used to construct “aging clocks” that predict chronological age with high precision in humans (Hannum et al., 2013; Horvath, 2013; Peters et al., 2015; Fleischer et al., 2018) and in other mammals (Stubbs et al., 2017; Thompson et al., 2017; Thompson et al., 2018). Since transcriptomic and epigenetic changes are reversible at least in principle, this raises the intriguing question of whether molecular attributes of aging can be reversed and cells phenotypically rejuvenated (Rando and Chang, 2012; Manukyan and Singh, 2012).
Induced pluripotent stem cell (iPSC) reprogramming is the process by which almost any somatic cell can be converted into an embryonic stem cell-like state. Intriguingly, iPSC reprogramming reverses many age-associated changes, including telomere attrition and oxidative stress (Lapasset et al., 2011). Notably, the epigenetic clock is reset back to approximately 0, suggesting reprogramming can reverse aging-associated epigenetic alterations (Horvath, 2013). However, iPSC reprogramming also results in the loss of original cell identity and therefore function. By contrast, transient reprogramming approaches, where the Yamanaka factors (Oct4, Sox2, Klf4, and c-Myc) are expressed for short periods of time, may be able to achieve rejuvenation without loss of cell identity. Reprogramming can be performed in vivo (Abad et al., 2013), and indeed, cyclical expression of the Yamanaka factors in vivo can extend lifespan in progeroid mice and improve cellular function in wild-type mice (Ocampo et al., 2016). An alternative approach for reprogramming in vivo also demonstrated reversal of aging-associated changes in retinal ganglion cells and was capable of restoring vision in a glaucoma mouse model (Lu et al., 2020). More recently, in vitro transient reprogramming has been shown to reverse multiple aspects of aging in human fibroblasts and chondrocytes (Sarkar et al., 2020). Nevertheless, the extent of epigenetic rejuvenation achieved by previous transient reprogramming methods has been modest (~3 years) compared to the drastic reduction achieved by complete iPSC reprogramming. A more detailed comparison of previous methods is provided in Supplementary file 1.
Here, we establish a novel transient reprogramming strategy where Yamanaka factors are expressed until the maturation phase (MP) of reprogramming before abolishing their induction (maturation phase transient reprogramming, MPTR), with which we were able to achieve robust and very substantial rejuvenation (~30 years) whilst retaining original cell identity overall.
Results
Transiently reprogrammed cells reacquire their initial cell identity
Reprogramming can be divided into three phases: the initiation phase (IP) where somatic expression is repressed and a mesenchymal-to-epithelial transition occurs; the MP, where a subset of pluripotency genes becomes expressed; and the stabilization phase (SP), where the complete pluripotency program is activated (Samavarchi-Tehrani et al., 2010; Figure 1A). Previous attempts at transient reprogramming have only reprogrammed within the IP (Ocampo et al., 2016; Sarkar et al., 2020). However, reprogramming further, up to the MP, may achieve more substantial rejuvenation. To investigate the potential of MPTR to reverse aging phenotypes, we generated a doxycycline-inducible polycistronic reprogramming cassette that encoded Oct4, Sox2, Klf4, c-Myc, and GFP. By using a polycistronic cassette, we could ensure that individual cells were able to express all four Yamanaka factors. This reprogramming cassette was capable of generating iPSC lines from human fibroblasts and induced a substantial reduction of DNA methylation age throughout the reprogramming process, consistent with previous work using a different reprogramming system (Olova et al., 2019; Figure 1A). Specifically, DNA methylation age as measured using the multi-tissue epigenetic clock (Horvath, 2013) was substantially reduced relatively early in the reprogramming process (which takes about 50 days to complete in this system), with an approximate rejuvenation of 20 years by day 10 and 40 years by day 17 (Figure 1A). Similar results were obtained using the skin and blood clock (Horvath et al., 2018; Figure 1—figure supplement 1A). Interestingly, other epigenetic clocks were rejuvenated later in the reprogramming process. This may suggest that the epigenome is rejuvenated in stages; however, we note that these other epigenetic clocks were not trained on fibroblast data. 
We therefore focussed on the window between days 10 and 17 to develop our MPTR protocol for human fibroblasts (Figure 1B), predicting that this would enable substantial reversal of aging phenotypes whilst potentially allowing cells to regain original cell identity. Beyond this window, cells would enter the SP and the endogenous pluripotency genes would become activated, preventing the cessation of reprogramming by withdrawing doxycycline alone (Samavarchi-Tehrani et al., 2010). The reprogramming cassette was introduced into fibroblasts from three middle-aged donors (chronologically aged 38, 53, and 53 years old and epigenetically aged 45, 49, and 55 years old, according to the multi-tissue epigenetic clock Horvath, 2013) by lentiviral transduction before selecting successfully transduced cells by sorting for GFP. We then reprogrammed the fibroblasts for different lengths of time (10, 13, 15, or 17 days) by supplementing the media with 2 µg/ml doxycycline and carried out flow sorting to isolate cells that were successfully reprogramming (labeled ‘transient reprogramming intermediate’: SSEA4 positive, CD13 negative) as well as the cells that had failed to reprogram (labeled ‘failing to transiently reprogram intermediate’: CD13 positive, SSEA4 negative). At this stage, approximately 25% of the cells were successfully reprogramming and approximately 35% of the cells were failing to reprogram, whilst the remainder were double positive or double negative (Figure 1—figure supplement 1B). Cells were harvested for DNA methylation array or RNA-seq analysis and also replated for further culture in the absence of doxycycline to stop the expression of the reprogramming cassette. 
Further culture for a period of 4–5 weeks in the absence of doxycycline generated ‘transiently reprogrammed fibroblasts,’ which had previously expressed SSEA4 at the intermediate stage, as well as ‘failed to transiently reprogram fibroblasts,’ which had expressed the reprogramming cassette (GFP-positive cells) but failed to express SSEA4. As a negative control, we simultaneously ‘mock infected’ (subject to transduction process but without lentiviruses) populations of fibroblasts from the same donors. These cells underwent an initial flow sort for viability (to account for the effects of the GFP sort) before culture under the same conditions as the reprogramming cells and flow sorting for CD13 (cells harvested at this stage generated a ‘negative control intermediate’ for methylome and transcriptome analyses). Finally, these ‘negative control intermediate’ cells were grown in the absence of doxycycline for the same length of time as experimental samples to account for the effects of extended cell culture, generating ‘negative control fibroblasts’ (Figure 1B).
(A) Mean DNA methylation age (calculated using the multi-tissue clock; Horvath, 2013) throughout the reprogramming process where cells were transduced with our tetO-GFP-hOKMS vector and treated continuously with 2 µg/ml of doxycycline. Reprogramming is divided into three distinct phases: initiation phase (IP), maturation phase (MP), and stabilization phase (SP). DNA methylation age decreased substantially during the MP of reprogramming in cells that were successfully reprogramming (magenta line) but not in control cells (yellow and orange lines represent non-transduced cells and cells expressing hOKMS but failing to reprogram as indicated by cell surface markers, respectively). Points represent the mean and error bars the standard deviation. N=3 biological replicates per condition, where fibroblasts were derived from different donors. N=2 biological replicates for the iPSC time point (day 51). (B) Experimental scheme for maturation phase transient reprogramming (MPTR). The tetO-GFP-hOKMS reprogramming construct was introduced into fibroblasts from older donors by lentiviral transduction. Alternatively, cells were ‘mock infected’ as a negative control. Following this, cells were grown in the presence of 2 µg/ml doxycycline to initiate reprogramming. At several time points during the MP, cells were flow sorted and successfully reprogramming cells (CD13− SSEA4+) and cells that were failing to reprogram (CD13+ SSEA4−) were collected for analysis. These were termed ‘transient reprogramming intermediate’ and ‘failing to transiently reprogram intermediate,’ respectively. Sorted cells were also further cultured, and grown in the absence of doxycycline for at least 4 weeks—these were termed ‘transiently reprogrammed’ (CD13− SSEA4+) or ‘failed to transiently reprogram’ (CD13+ SSEA4−).
(C) Phase-contrast microscope images of cells after doxycycline treatment (transient reprogramming intermediate) and after withdrawal of doxycycline (transiently reprogrammed) as described in (B). The morphology of some cells changed after doxycycline treatment. These cells appeared to form colonies, which became larger with longer exposure to doxycycline. After sorting, these cells were cultured in medium no longer containing doxycycline, and appeared to return to their initial fibroblast morphology. (D) Roundness ratio of cells before, during, and after MPTR (with 13 days of reprogramming). Roundness ratio was calculated by dividing maximum length by perpendicular width. Fibroblasts became significantly rounder during MPTR and returned to a more elongated state upon the completion of MPTR. Values from individual cells have been represented as violin plots. Points represent mean values and are connected with lines. Significance was calculated with a Tukey’s range test. Representative 3D renderings of cells (generated using Volocity) before, during, and after successful transient reprogramming are included below the plot. CD13 is colored in green, SSEA4 is colored in red, and DAPI staining is colored in blue. White scale bars represent a distance of 20 µm. (E) Principal component analysis of transient reprogramming and reference reprogramming sample transcriptomes (light blue to dark blue and black crosses, data from Banovich et al., 2018, Fleischer et al., 2018 and our novel Sendai reprogramming data set). Reference samples form a reprogramming trajectory along PC1. In the Sendai reprogramming reference data set, cells that were not reprogramming (CD13+ SSEA4−) were also profiled and clustered midway along PC1 suggesting some transcriptional changes had still occurred in these cells. 
Transient reprogramming samples moved along this trajectory with continued exposure to doxycycline (light magenta points) and returned to the beginning of the trajectory after withdrawal of doxycycline (magenta points). Control samples (yellow and orange points) remained at the beginning of the trajectory throughout the experiment. (F) Mean gene expression levels for the fibroblast specific gene FSP1 and the iPSC specific gene NANOG. Transiently reprogrammed samples expressed these genes at levels similar to control fibroblasts. Bars represent the mean and error bars the standard deviation. Samples transiently reprogrammed for 10, 13, 15, or 17 days were pooled. The number of distinct samples in each group is indicated in brackets. (G) Principal component analysis of transient reprogramming (magenta points) and reference reprogramming sample methylomes (light blue to dark blue and black crosses, data from Banovich et al., 2018, Ohnuki et al., 2014 and our novel Sendai reprogramming data set). Reference samples formed a reprogramming trajectory along PC1. Transient reprogramming samples moved along this trajectory with continued exposure to doxycycline (light magenta points) and returned to the beginning of the trajectory after withdrawal of doxycycline (magenta points). Control samples (yellow and orange points) remained at the beginning of the trajectory throughout the experiment. (H) Mean DNA methylation levels across the fibroblast-specific gene FSP1 and the iPSC-specific gene POU5F1 (encoding OCT4). Transiently reprogrammed samples had methylation profiles across these genes that resemble those found in fibroblasts. Gray bars and black bars indicate the locations of Ensembl annotated promoters and genes, respectively. Samples transiently reprogrammed for 10, 13, 15, or 17 days were pooled for visualization purposes. The number of distinct samples in each group is indicated in brackets. 
iPSC, induced pluripotent stem cell; MPTR, maturation phase transient reprogramming.
After reprogramming for 10–17 days, we found the fibroblasts had undergone dramatic changes in morphology. Upon visual inspection using a light microscope, it appeared that the cells had undergone a mesenchymal-to-epithelial-like transition and were forming colony structures that progressively became larger with longer periods of reprogramming, consistent with the emergence of the early pluripotency marker SSEA4. After sorting the cells and culturing in the absence of doxycycline, we found they were able to return to their initial fibroblast morphology, showing that morphological reversion is possible even after 17 days of reprogramming (Figure 1C). We quantified the morphology changes by calculating a ratio indicative of ‘roundness’ (maximum length divided by perpendicular width) for individual cells before, during, and after MPTR (Figure 1D and Figure 1—figure supplement 1C). We found that successfully reprogramming cells became significantly rounder at the intermediate stages of MPTR compared to the starting fibroblasts and then returned to an elongated state upon the completion of MPTR. Of note, we found that there was no significant difference in roundness between cells before and after MPTR, further supporting that fibroblasts were able to return to their original morphology. In comparison, cells that were failing to reprogram and negative control cells did not undergo as substantial a change during MPTR and were significantly more elongated at the intermediate stage (Supplementary file 2).
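The roundness ratio used above (maximum length divided by perpendicular width) can be computed directly from a cell outline. The following is a minimal sketch of that calculation on synthetic outlines; it is an illustration of the definition, not the authors' actual image-analysis pipeline:

```python
# Hedged sketch: "roundness ratio" of Figure 1D = maximum cell length
# divided by the cell's width perpendicular to that axis, computed here
# from 2D outline coordinates (synthetic shapes, not real cell masks).
import numpy as np

def roundness_ratio(outline: np.ndarray) -> float:
    """outline: (N, 2) array of x/y coordinates along the cell boundary."""
    # Maximum length: the largest distance between any two outline points.
    diffs = outline[:, None, :] - outline[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    i, j = np.unravel_index(np.argmax(dists), dists.shape)
    max_length = dists[i, j]

    # Width: extent of the outline projected onto the perpendicular axis.
    axis = (outline[j] - outline[i]) / max_length
    perp = np.array([-axis[1], axis[0]])
    proj = outline @ perp
    width = proj.max() - proj.min()
    return max_length / width

# A 4:1 ellipse (elongated, fibroblast-like) vs a circle (round).
t = np.linspace(0, 2 * np.pi, 200)
ellipse = np.column_stack([4 * np.cos(t), np.sin(t)])
circle = np.column_stack([np.cos(t), np.sin(t)])
print(f"ellipse: {roundness_ratio(ellipse):.2f}, circle: {roundness_ratio(circle):.2f}")
```

By this definition an elongated fibroblast gives a ratio well above 1, while a rounded reprogramming intermediate approaches 1.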
We further investigated the identity of the cells after MPTR by conducting DNA methylation array analysis and RNA sequencing to examine their methylomes and transcriptomes, respectively. We included published reprogramming data sets in our analysis as well as a novel reprogramming data set that we generated based on Sendai virus delivery of the Yamanaka factors to act as a reference (Fleischer et al., 2018; Ohnuki et al., 2014; Banovich et al., 2018). Principal component analysis (PCA) using expression values of all genes in the transcriptome separated cells based on the extent of reprogramming, and the reference data sets formed a reprogramming trajectory along PC1 (Figure 1E). Transient reprogramming intermediate cells (collected after the reprogramming phase but before the reversion phase) clustered halfway along this trajectory, implying that cells lose aspects of the fibroblast transcriptional program and/or gain aspects of the pluripotency transcriptional program, which is consistent with the loss of the fibroblast surface marker CD13 and gain of the iPSC surface marker SSEA4. We note that the different time points for the transient reprogramming intermediate samples clustered closer together when examining their transcriptomes compared to their DNA methylomes. This suggests that changes in the DNA methylome occur more gradually, whereas changes in the transcriptome occur in more discrete stages. Notably, upon completion of MPTR, transiently reprogrammed samples clustered at the beginning of this trajectory, showing that these samples once again transcriptionally resemble fibroblasts rather than reprogramming intermediates or iPSCs (Figure 1E). Similar findings were made when the reference data sets were excluded (Figure 1—figure supplement 1D). For example, transiently reprogrammed cells did not express the pluripotency marker NANOG and expressed high levels of the fibroblast marker FSP1 (Figure 1F).
Notably, NANOG was temporarily expressed at high levels at the intermediate stages of transient reprogramming alongside FSP1, suggesting that these cells simultaneously possessed some transcriptional attributes of both fibroblasts and iPSCs.
Similarly, PCA of the methylomes separated cells based on the extent of reprogramming, and the reference data sets formed a reprogramming trajectory along PC1. PC2 separated mid-reprogramming samples from initial fibroblasts and final iPSCs and was driven by CpG sites that are temporarily hypermethylated or hypomethylated during reprogramming. These CpG sites appeared near genes associated with asymmetric protein localization according to gene ontology analysis. As with the transcriptome, intermediate samples from our transient reprogramming experiment clustered along this reprogramming trajectory (Figure 1G), showing that cells move epigenetically toward pluripotency. Notably, the transiently reprogrammed samples returned to the start of this trajectory (with the reference fibroblast samples), revealing that they epigenetically resembled fibroblasts once again. As for the transcriptome, similar findings were made when the reference data sets were excluded (Figure 1—figure supplement 1E). We found typical regions that change during reprogramming were fibroblast-like after transient reprogramming (Takahashi et al., 2007), such as the promoter of POU5F1 being hypermethylated and the promoter of FSP1 being hypomethylated in our transiently reprogrammed cells (Figure 1H). Notably, the POU5F1 promoter was temporarily demethylated and the FSP1 promoter remained lowly methylated at the intermediate stages of transient reprogramming, suggesting that these intermediate-stage cells possess some epigenetic features of both fibroblasts and iPSCs. Taken together, these data demonstrate that fibroblasts can be transiently reprogrammed to the MP and then revert to a state that is morphologically, epigenetically, and transcriptionally similar to the starting cell identity. To our knowledge, this is the first method for MPTR, where Yamanaka factors are transiently expressed up to the MP of reprogramming before the expression of the factors is abolished.
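The trajectory analyses above, for both the transcriptome and the methylome, amount to fitting PCA on the reference reprogramming samples and projecting the experimental samples onto the same components. A minimal sketch with synthetic expression values (all dimensions and magnitudes are hypothetical):

```python
# Hedged sketch of the trajectory analysis: fit PCA on reference
# reprogramming samples (fibroblast -> iPSC), then project a query
# sample onto the same components. All values are synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_genes = 1000

# Reference samples along a fibroblast -> iPSC axis (stage 0 -> 1).
stage = np.linspace(0, 1, 30)
direction = rng.normal(size=n_genes)  # genes that change during reprogramming
reference = stage[:, None] * direction + rng.normal(scale=0.3, size=(30, n_genes))

pca = PCA(n_components=2).fit(reference)
ref_pc1 = pca.transform(reference)[:, 0]
# Orient PC1 so that it increases from fibroblast to iPSC.
sign = 1.0 if np.corrcoef(ref_pc1, stage)[0, 1] > 0 else -1.0
ref_pc1 *= sign

# A query sample resembling the starting fibroblasts (stage ~0.05)
# projects to the fibroblast end of the trajectory.
query = 0.05 * direction + rng.normal(scale=0.3, size=(1, n_genes))
query_pc1 = sign * pca.transform(query)[0, 0]
print(f"query PC1 {query_pc1:.1f}, reference PC1 range "
      f"[{ref_pc1.min():.1f}, {ref_pc1.max():.1f}]")
```

The key design choice is that the components are learned only from the reference samples, so the position of a transiently reprogrammed sample along PC1 can be read as its position on the reference reprogramming trajectory.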
Epigenetic memory and transcriptional persistence are present at the intermediate stages of transient reprogramming
Though transiently reprogrammed fibroblasts temporarily lost their cell identity (becoming SSEA4 positive and CD13 negative), they were able to reacquire it once the reprogramming factors were removed, suggesting that they retained memory of their initial cell identity. To examine the source of this memory, we initially defined fibroblast-specific and iPSC-specific gene sets using differential expression analysis on fibroblasts before and after complete reprogramming with our system (Figure 2—figure supplement 1A). We subsequently analyzed the expression of these gene sets throughout MPTR and observed that fibroblast-specific genes were temporarily downregulated whilst iPSC-specific genes were temporarily upregulated (Figure 2A). As expected, these gene sets were further downregulated and upregulated during complete reprogramming, respectively (Figure 2A). We note that this approach generalizes the expression changes and as a result, may obscure subclusters within these gene sets that display different expression trajectories. Therefore, we analyzed the expression levels of individual genes to gain further insight into these gene sets. After performing hierarchical clustering, we observed that the majority of genes within the fibroblast-specific gene set were temporarily downregulated during transient reprogramming (2803 genes out of 4178). However, we also observed that the remaining genes formed two additional clusters that were temporarily upregulated (961 genes) and persistently expressed (414 genes), respectively (Figure 2B, Figure 2—figure supplement 1B and Supplementary file 3). We also clustered the genes within the iPSC-specific gene set and observed that the majority of iPSC genes were upregulated in transient reprogramming intermediate cells to levels similar to iPSCs and the remaining genes were not yet activated (Figure 2—figure supplement 1C). 
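The clustering step described above, which splits fibroblast-specific genes into temporarily downregulated, temporarily upregulated, and persistently expressed groups according to their expression trajectories, can be sketched as follows. The data are synthetic, and since the authors' exact distance metric and linkage are not given here, Ward linkage is an assumption:

```python
# Hedged sketch: hierarchical clustering of gene expression trajectories
# across the stages before -> intermediate -> after transient
# reprogramming (synthetic z-scored profiles).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)

def noisy(profile, n):
    """n noisy copies of a 3-point expression profile."""
    return np.asarray(profile) + rng.normal(scale=0.2, size=(n, 3))

# Rows: genes; columns: expression before / intermediate / after.
trajectories = np.vstack([
    noisy([1.0, -1.5, 1.0], 60),   # temporarily downregulated
    noisy([1.0, 2.5, 1.0], 20),    # temporarily upregulated
    noisy([1.0, 1.0, 1.0], 10),    # persistently expressed
])

Z = linkage(trajectories, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")
sizes = sorted(np.bincount(labels)[1:], reverse=True)
print(sizes)  # cluster sizes
```

With real data each row would be a fibroblast-specific gene averaged across donors, and the cluster assignments would then be inspected and annotated (e.g. by gene ontology analysis) rather than taken at face value.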
We subsequently performed gene ontology analysis on the fibroblast-specific gene clusters and found that the temporarily upregulated cluster was enriched for gene ontology categories such as ‘response to lipopolysaccharide’, suggesting that inflammatory signaling pathways are temporarily activated during transient reprogramming, likely in response to the reprogramming factors. Interestingly, the persistently expressed gene cluster was enriched for gene ontology categories such as ‘extracellular matrix’ and ‘collagen fibril organization’, suggesting that some aspects of fibroblast function are maintained during transient reprogramming, at least at the transcriptional level (Figure 2—figure supplement 1D).
Epigenetic memory at enhancers and persistent fibroblast gene expression may allow cells to return to their initial identity.
(A) The mean expression levels of fibroblast-specific and iPSC-specific gene sets during transient reprogramming and complete reprogramming. Error bars represent the standard deviation. (B) Heatmap examining the expression of fibroblast-specific genes in cells before (light blue group), during (light magenta group, transient reprogramming intermediate cells), and after (magenta group, transiently reprogrammed fibroblasts) transient reprogramming as well as in iPSCs (dark blue group). The number of days of reprogramming is indicated above the heatmap where applicable. The majority of fibroblast genes are downregulated at the intermediate stages of transient reprogramming. However, some fibroblast genes are persistently expressed or temporarily upregulated at this stage. (C) Mean DNA methylation levels across enhancers linked to the three clusters of fibroblast genes during transient reprogramming and complete reprogramming. DNA methylation levels across enhancers remain unchanged during transient reprogramming regardless of the expression of their associated genes. In comparison, DNA methylation levels across these regions increase during complete reprogramming. Error bars represent the standard deviation. (D) Heatmap examining the DNA methylation levels of fibroblast-specific enhancers in cells before (light blue group), during (light magenta group), and after (magenta group) transient reprogramming as well as in iPSCs (dark blue group). Each sample was plotted as a single column, whether reprogrammed for 10, 13, 15, or 17 days. Fibroblast enhancers became hypermethylated during complete reprogramming but were still demethylated at the intermediate stages of transient reprogramming. Fibroblast-specific enhancers were defined as enhancers that are active in fibroblasts but no longer active in iPSCs (become inactive, poised, or repressed) based on Ensembl regulatory build annotations. 
(E) The mean expression and enhancer methylation levels of example genes during transient reprogramming and complete reprogramming. MMP1 is a gene that demonstrates epigenetic memory as it is temporarily downregulated during transient reprogramming and its enhancer remains demethylated. COL1A2 is a gene that demonstrates transcriptional persistence as it remains expressed throughout transient reprogramming. iPSC, induced pluripotent stem cell.
We also questioned whether the epigenome played a role in the retention of memory of the initial cell type, particularly for genes that were temporarily downregulated. We therefore examined the DNA methylation levels at regulatory elements linked to the fibroblast-specific genes. We used the Ensembl Regulatory Build (Zerbino et al., 2015) to obtain the locations of promoter and enhancer elements as well as their activity status in dermal fibroblasts and iPSCs. We then focussed on promoter and enhancer elements that are active in fibroblasts and linked them to the nearest transcription start site (within 1 kb for promoters and 1 Mb for enhancers). The promoters associated with fibroblast genes remained lowly methylated throughout transient reprogramming and complete reprogramming regardless of the gene cluster, suggesting that promoter methylation does not contribute substantially toward memory (Figure 2—figure supplement 1E). In contrast, enhancers associated with fibroblast genes gained DNA methylation but only during complete reprogramming and not during transient reprogramming (Figure 2C and Figure 2—figure supplement 1F). This was the case for enhancers linked to the genes in all three clusters; in the case of temporarily downregulated genes, the lack of hypermethylation may confer epigenetic memory at a time when the associated genes are transcriptionally repressed. We also examined fibroblast-specific enhancers in general and defined these as enhancers that are active in fibroblasts but are no longer active in iPSCs. Similar to the previous analysis, we found that DNA methylation was relatively dynamic at fibroblast-specific enhancers. Approximately half of all fibroblast-specific enhancers (2351 out of the covered 4204 enhancers) gained DNA methylation during iPSC reprogramming. However, even at day 17 of the reprogramming process (the longest transient reprogramming intermediate tested here), these enhancers still remained hypomethylated (Figure 2D).
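The linking of regulatory elements to the nearest transcription start site (TSS) within a distance cutoff (1 kb for promoters, 1 Mb for enhancers) can be sketched as a nearest-neighbor search over sorted TSS coordinates. The coordinates and gene names below are toy values, not real annotations:

```python
# Hedged sketch: assign each regulatory element to the nearest TSS on
# the same chromosome, subject to a distance cutoff (1 kb for promoters,
# 1 Mb for enhancers, as in the text). Toy coordinates only.
import bisect

# Sorted TSS positions per chromosome (hypothetical genes).
tss = {"chr1": [(5_000, "GENE_A"), (900_000, "GENE_B"), (2_500_000, "GENE_C")]}

def nearest_tss(chrom, pos, max_dist):
    sites = tss.get(chrom, [])
    positions = [p for p, _ in sites]
    i = bisect.bisect_left(positions, pos)
    # Only the flanking TSSs on either side can be the nearest one.
    candidates = [sites[k] for k in (i - 1, i) if 0 <= k < len(sites)]
    best = min(candidates, key=lambda s: abs(s[0] - pos), default=None)
    if best is None or abs(best[0] - pos) > max_dist:
        return None
    return best[1]

print(nearest_tss("chr1", 902_000, max_dist=1_000_000))  # enhancer cutoff
print(nearest_tss("chr1", 4_500, max_dist=1_000))        # promoter cutoff
print(nearest_tss("chr1", 1_700_000, max_dist=1_000))    # beyond cutoff
```

In practice this would be done with a genomic-interval library (e.g. bedtools-style closest-feature queries), but the logic is the same: sort TSSs per chromosome, find the flanking sites, and apply the element-type-specific distance cutoff.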
Overall, we hypothesize that both epigenetic memory at genes such as MMP1 (Figure 2E) and transcriptional persistence at genes such as COL1A2 (Figure 2E) enable cells to return to their original cell type once the reprogramming factors are withdrawn. Taken together, these two attributes may act as the source of memory for initial cell identity during a time when the somatic transcriptional program is otherwise mostly repressed and somatic proteins such as CD13 are lost (Polo et al., 2012; David and Polo, 2014).
We next investigated the transcriptome to determine if there was any evidence of rejuvenation in this omic layer. We initially identified genes that significantly correlated with age in a reference fibroblast aging data set (Fleischer et al., 2018) and used genes with a significant Pearson correlation after Bonferroni correction (p≤0.05) to carry out PCA (3707 genes). The samples primarily separated by age, and the reference fibroblast samples formed an aging trajectory. The transiently reprogrammed samples clustered closer to the young fibroblasts along PC1 than the negative control samples (Figure 3A). Based on the relationship between PC1 and age in the reference data set, we inferred that transiently reprogrammed samples were approximately 40 years younger than the negative control samples (Figure 3B). To further quantify the extent of rejuvenation, we investigated the effect of MPTR using transcription clocks. Unfortunately, existing transcription clocks failed to accurately predict the age of our negative control samples. This may be due to batch effects such as differences in RNA-seq library preparation and data processing pipelines. To overcome this problem, we trained a transcription age predictor using random forest regression on published fibroblast RNA-seq data from donors aged 1–94 years old that was batch corrected to our transient reprogramming data set (Fleischer et al., 2018). The transcription age predictor was trained on transformed age, similar to the Horvath epigenetic clock, to account for the accelerated aging rate during childhood and adolescence (Horvath, 2013). The final transcription age predictor had a median absolute error of 12.57 years (Figure 3—figure supplement 1A), this error being higher than that of the epigenetic clock, consistent with previous transcription age predictors (Peters et al., 2015; Fleischer et al., 2018).
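A transcription age predictor of the kind described, random forest regression trained on transformed age, can be sketched as follows. The log-linear transform below (with an adult age of 20) is the one used by the Horvath epigenetic clock; that this exact form was used here, and all of the data, are assumptions for illustration:

```python
# Hedged sketch (synthetic data): random forest transcription-age
# predictor trained on Horvath-style transformed age, which changes
# faster during childhood/adolescence and linearly in adulthood.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

ADULT_AGE = 20

def transform_age(age):
    age = np.asarray(age, dtype=float)
    return np.where(age <= ADULT_AGE,
                    np.log(age + 1) - np.log(ADULT_AGE + 1),
                    (age - ADULT_AGE) / (ADULT_AGE + 1))

def inverse_transform_age(y):
    y = np.asarray(y, dtype=float)
    return np.where(y <= 0,
                    (ADULT_AGE + 1) * np.exp(y) - 1,
                    (ADULT_AGE + 1) * y + ADULT_AGE)

rng = np.random.default_rng(3)
n_samples, n_genes = 150, 300
ages = rng.uniform(1, 94, n_samples)
expr = rng.normal(size=(n_samples, n_genes))
expr[:, :30] += 0.02 * ages[:, None]  # 30 genes drift with age

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(expr, transform_age(ages))
pred = inverse_transform_age(model.predict(expr))
mae = np.median(np.abs(pred - ages))
print(f"median absolute error: {mae:.1f} years")
```

A real predictor would be evaluated on held-out donors (the error above is in-sample and therefore optimistic), and the input data would first be batch corrected to the query data set, as described in the text.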
Using our predictor, we found that transient reprogramming reduced mean transcription age by approximately 30 years (Figure 3C). We also observed a moderate reduction in transcription age in cells that failed to transiently reprogram (SSEA4 negative at the intermediate time point), suggesting expression of the reprogramming factors alone was capable of rejuvenating some aspects of the transcriptome. Interestingly, we observed that MPTR with longer reprogramming phases reduced the extent of rejuvenation, suggesting that 10 or 13 days may be the optimum for transcriptional rejuvenation. We note that the reduction in transcription age from MPTR appears to be greater than that recently achieved by transient transfection of the Yamanaka factors (Sarkar et al., 2020), which was by approximately 10 years according to our transcription age predictor (Figure 3—figure supplement 1B), consistent with our approach of reprogramming further into the MP rather than only up to the end of the IP. Recently, a novel transcription clock, the BiT age clock, has been defined (Meyer and Schumacher, 2021), which has been trained on binarized gene expression levels. This clock has a very low median absolute error, which is comparable to that of epigenetic clocks. We ran a retrained version of the BiT age clock on our data set and obtained findings similar to those from our random forest-based clock. Of note, we observed that transient reprogramming also reduced BiT clock age by approximately 20 years relative to negative controls and that 10 or 13 days of reprogramming was optimal for maximal transcriptional rejuvenation (Figure 3—figure supplement 1C).
(A) Principal component analysis (PCA) of fibroblast aging-associated gene expression levels in transient reprogramming (magenta) and reference aging fibroblast samples (light blue-dark blue). Reference samples formed an aging trajectory along PC1. Transiently reprogrammed samples clustered closer to the young fibroblasts than negative control samples (yellow and orange) did, suggesting they were transcriptionally younger. (B) PC1 values from the PCA of fibroblast aging-associated gene expression levels and their equivalent age based on the reference aging fibroblast samples. PC1 values were greater in transiently reprogrammed samples than in negative control and failed to transiently reprogram samples, and as a result these samples appear to be younger. Bars represent the mean and error bars represent the standard deviation. (C) Mean transcription age calculated using a custom transcriptome clock (median absolute error = 12.57 years) for negative control samples (yellow), samples that expressed OSKM but failed to reprogram based on cell surface markers (orange) and cells that were successfully transiently reprogrammed (magenta) as described in Figure 1B for 10, 13, 15, or 17 days. The number of distinct samples in each group is indicated in brackets. Bars represent the mean and error bars the standard deviation. Statistical significance was calculated with Mann-Whitney U-tests. (D) The mean expression levels of all genes in transiently reprogrammed samples with 13 days of reprogramming compared to those in corresponding negative control samples. In addition, genes have been color coded by their expression change with age. Genes that upregulate with age were downregulated with transient reprogramming and genes that downregulate with age were upregulated with transient reprogramming. Notable example genes have been highlighted. The number of distinct samples in each group is indicated in brackets.
(E) The expression levels of collagen genes that were restored to youthful levels after transient reprogramming with 13 days of reprogramming. Bars represent the mean and error bars the standard deviation. The number of distinct samples in each group is indicated in square brackets. Significance was calculated with a two-sided Mann-Whitney U-test. (F) Boxplots of the protein levels of collagen I and IV in individual cells after transient reprogramming for 10 or 13 days calculated based on fluorescence intensity within segmented cells following immunofluorescence staining. Boxes represent upper and lower quartiles and central lines the median. The protein levels of collagen I and IV increased after transient reprogramming. The number of distinct samples in each group is indicated in square brackets. Representative images are included (bottom panel). CD44 is colored in green, collagen I and IV are colored in red, and DAPI staining is colored in blue. Significance was calculated with a two-sided Mann-Whitney U-test. (G) The migration speed of fibroblasts in a wound healing assay. Migration speed was significantly lower in negative control fibroblasts from middle-aged donors compared to fibroblasts from young donors (aged 20–22). Transient reprogramming improved the migration speed in some samples but had no effect in others. Technical replicates were averaged, and the mean values have been presented as boxplots where the boxes represent the upper and lower quartiles and the central lines the median. Significance was calculated with a Tukey’s range test.
We further profiled the effects of MPTR with 13 days of reprogramming (due to its apparent significance) by examining the whole transcriptome. This was achieved by comparing the expression levels of genes in transiently reprogrammed cells to those in negative control cells and subsequently overlaying the expression change due to age calculated using the reference aging data set (Fleischer et al., 2018). As expected, we observed an overall reversal of the aging trends, with genes upregulated during aging being downregulated following transient reprogramming and genes downregulated during aging being upregulated following transient reprogramming (Figure 3D, Figure 3—figure supplement 1D). Notably, structural proteins downregulated with age that were upregulated upon transient reprogramming included the cytokeratins 8 and 18 as well as subunits of collagen IV.
The production of collagens is a major function of fibroblasts (Humphrey et al., 2014), thus we examined the expression of all collagen genes during fibroblast aging and after transient reprogramming with 13 days of reprogramming (Figure 3E). As shown previously (Varani et al., 2006; Lago and Puzzi, 2019), we found collagen I and IV were downregulated with age, with collagen IV demonstrating a more dramatic reduction. Notably, the expression of both genes was restored to youthful levels after transient reprogramming, though this was not significant for collagen I, likely due to the small expression difference associated with age and the lower number of samples (Figure 3E). We then assessed by immunofluorescence whether this increased mRNA expression resulted in increased protein levels and indeed found that transient reprogramming increased collagen I and IV protein toward more youthful levels (Figure 3F). Fibroblasts are also involved in wound healing responses (Li and Wang, 2011), so we investigated the impact of transient reprogramming on this function using an in vitro wound healing assay (Figure 3G and Figure 3—figure supplement 1E). We found that migration speed was significantly reduced in our control fibroblasts from middle-aged donors compared to fibroblasts from young donors (aged 20–22 years). Transient reprogramming improved the median migration speed; however, individual responses were variable, with migration speed improving in some samples and remaining unaffected in others. Interestingly, this did not appear to correlate with other aging measures such as transcription and methylation clocks. Our data show that transient reprogramming followed by reversion can rejuvenate fibroblasts both transcriptionally and at the protein level, at least based on collagen production, and functionally at least in part. This indicates that our rejuvenation protocol can, in principle, restore youthful functionality in human cells.
After finding evidence of transcriptomic rejuvenation, we sought to determine whether there were also aspects of rejuvenation in the epigenome. We initially examined global levels of H3K9me3 by immunofluorescence. H3K9me3 is a histone modification associated with heterochromatin that has been previously shown to be reduced globally with age in a number of organisms (Ni et al., 2012), including in human fibroblasts (O’Sullivan et al., 2010; Scaffidi and Misteli, 2006). We were able to confirm this observation and found that MPTR was able to substantially reverse this age-associated reduction back to a level comparable with fibroblasts from younger donors (with a mean age of 33 years old). Both 10 and 13 days of transient reprogramming increased global levels of H3K9me3 suggesting that this epigenetic mark, similar to the transcriptome, has a relatively broad window for rejuvenation by transient reprogramming. We also observed a slight increase in H3K9me3 levels in cells that failed to transiently reprogram, suggesting that expression of the reprogramming factors alone is capable of partially restoring this epigenetic mark (Figure 4A), as was observed for our transcriptome-based age-predictor (Figure 3C). The magnitude of rejuvenation in H3K9me3 levels in our transiently reprogrammed cells is similar to that observed from IP transient reprogramming (Sarkar et al., 2020).
Optimal transient reprogramming can reverse age-associated changes in the epigenome.
(A) Boxplots of the levels of H3K9me3 in individual cells calculated based on fluorescence intensity within nuclei (segmented using DAPI). The levels of H3K9me3 were found to decrease with age and increase after transient reprogramming for 10 or 13 days. Boxes represent upper and lower quartiles and central lines the median. The number of distinct samples in each group is indicated in square brackets. Representative images are included (right panel). H3K9me3 is colored in green and DAPI staining is colored in gray scale. Significance was calculated with a two-sided Mann-Whitney U-test. (B) Mean DNA methylation age of samples after transient reprogramming calculated using the multi-tissue clock (Horvath, 2013). DNA methylation age substantially reduced after 13 days of transient reprogramming. Shorter and longer lengths of transient reprogramming led to smaller reductions in DNA methylation age. Bars represent the mean and error bars represent the standard deviation. The outlier in the 13 days of transient reprogramming group was excluded from calculation of the mean and standard deviation. Significance was calculated with a two-sided Mann-Whitney U-test with (in brackets) and without the outlier. The number of distinct samples in each group is indicated in brackets beneath the bars. (C) Mean telomere length of samples after transient reprogramming calculated using the telomere length clock (Lu et al., 2019b). Telomere length either did not change or was slightly reduced after transient reprogramming. Bars represent the mean and error bars represent the standard deviation. Significance was calculated with a two-sided Mann-Whitney U-test. (D) Mean DNA methylation levels across a rejuvenated age-hypomethylated region. This region is found within the IRX5 promoter. Samples transiently reprogrammed for 13 days were pooled for visualization purposes. The number of distinct samples in each group is indicated in brackets.
(E) Mean DNA methylation levels across rejuvenated age-hypermethylated regions. These regions are found within the GAD1 promoter and HOXB locus. Samples transiently reprogrammed for 13 days were pooled for visualization purposes. The number of distinct samples in each group is indicated in brackets. (F) The overlap in rejuvenated methylation CpG sites and rejuvenated expression genes. Rejuvenated CpG sites were annotated with the nearest gene for this overlap analysis. The universal set was restricted to genes that were annotated to CpG sites in the DNA methylation array. Fisher’s exact test was used to calculate the significance of the overlap. The six genes that were found in both sets are listed along with the direction of their DNA methylation (red) and gene expression (blue) change with age.
We next applied the epigenetic clock, a multi-tissue age predictor that predicts age based on the DNA methylation levels at 353 CpG sites (Horvath, 2013), to our data. Notably, with 13 days of transient reprogramming, we observed a substantial reduction of the median DNA methylation age—by approximately 30 years, quantitatively the same rejuvenation as we saw in the transcriptome (Figure 4B). A shorter period of transient reprogramming (10 days) resulted in a smaller reduction of DNA methylation age, consistent with our results profiling DNA methylation age throughout the reprogramming process, where DNA methylation age gradually reduced throughout the MP (Figure 1A). This epigenetic rejuvenation is potentially promoted by de novo methylation and active demethylation as the de novo methyltransferases and TET enzymes are upregulated during the MP (Figure 4—figure supplement 1A). Potentially, some of the rejuvenating mechanisms occurring in MPTR may mirror those that occur during embryonic development as epigenetic rejuvenation during embryonic development coincides with de novo methylation of the genome (Kerepesi et al., 2021). Similar to the transcription clocks, we also observed a smaller reduction in DNA methylation age with longer transient reprogramming times, suggesting that some aspects of the observed epigenetic rejuvenation are lost during the reversion phase of our MPTR protocol. Potentially, extended reprogramming (for 15 or 17 days) may make reversion more difficult and result in cellular stresses that ‘re-age’ the methylome during the process. Similar results were obtained using the skin and blood clock and the Weidner clock (Horvath et al., 2018; Weidner et al., 2014; Figure 4—figure supplement 1B). Other epigenetic clocks were not rejuvenated by MPTR; however, we note that these clocks either rejuvenate later in the reprogramming process or are unaffected by reprogramming (Figure 1—figure supplement 1A).
Telomeres are protective structures at the ends of chromosomes that consist of repetitive sequences. Telomere length decreases with age due to cell proliferation in the absence of telomerase enzymes and is restored upon complete iPSC reprogramming (Lapasset et al., 2011). To investigate the effect of transient reprogramming on telomere length, we used the telomere length clock, which predicts telomere length based on the methylation levels at 140 CpG sites (Lu et al., 2019b). We found that MPTR does not affect telomere length and, in some cases, slightly reduces it (Figure 4C). This is consistent with our results profiling telomere length throughout complete reprogramming using our doxycycline inducible system, where telomere length did not increase until the SP (Figure 4—figure supplement 1C). This coincides with the expression of telomerase during reprogramming, where it is weakly expressed during the later stages of the MP and only strongly expressed during the SP (Figure 4—figure supplement 1D).
Next, we investigated the locations of the rejuvenated CpG sites and found that most were individual sites spread across the genome (Figure 4—figure supplement 1E). Some of these individual CpG sites may be part of larger regions of rejuvenated methylation, which we are unable to fully detect due to the targeted nature of DNA methylation array profiling; however, we found a few small clusters of rejuvenated CpG sites. We found that a small region in the IRX5 promoter became demethylated with age and transient reprogramming was able to partially remethylate this region (Figure 4D). IRX5 is involved in embryonic development so demethylation of its promoter with age may lead to inappropriate expression (Costantini et al., 2005; Cheng et al., 2005). We also found two regions that became hypermethylated with age and were demethylated by transient reprogramming (Figure 4E). One of these regions is in the promoter of GAD1, which encodes an enzyme that catalyzes the conversion of glutamic acid into gamma-aminobutyric acid (Bu et al., 1992). The other region is within the HOXB locus, involved in anterior-posterior patterning during development (Pearson et al., 2005). Finally, we examined whether there was any overlap between the epigenetic and transcriptional rejuvenation. We therefore annotated the rejuvenated CpG sites with the nearest gene and then overlapped this gene set with the list of genes with rejuvenated expression. We found that there was a significant overlap between these two groups suggesting that epigenetic rejuvenation and transcriptional rejuvenation may be partially linked (Figure 4F). We further examined these overlapping genes and found that several had structural roles. These included FBN2 and TNXB, which encode components of the extracellular matrix (Zhang et al., 1994; Bristow et al., 1993) and SPTB, which encodes a component of the cytoskeletal network (Garbe et al., 2007).
WISP2 was also rejuvenated transcriptionally and epigenetically; this gene is an activator of the canonical WNT pathway (Grünberg et al., 2014) and has recently been shown to inhibit collagen linearisation (Janjanam, 2021). ASPA and STRA6 respectively encode an enzyme that hydrolyses N-acetyl-L-aspartate and a vitamin A receptor (Bitto et al., 2007; Amengual et al., 2014). Neither of these genes have obvious roles in fibroblasts. We note that additional overlaps between epigenetic and transcriptional rejuvenation may exist that are not observed in our study due to the limited genomic coverage of DNA methylation arrays. Overall, our data demonstrate that transient reprogramming for 13 days (but apparently not for longer or shorter periods) represents a ‘sweet spot’ that facilitates partial rejuvenation of both the methylome and transcriptome, reducing epigenetic and transcriptional age by approximately 30 years.
Discussion
Here, we have developed a novel method, MPTR, where the Yamanaka factors are ectopically expressed until the MP of reprogramming is reached, and their induction is then withdrawn. MPTR rejuvenates multiple molecular hallmarks of aging robustly and substantially, including the transcriptome, epigenome, functional protein expression, and cell migration speed. Previous attempts at transient reprogramming have been restricted to the IP in order to conserve initial cell identity (Ocampo et al., 2016; Lu et al., 2020; Sarkar et al., 2020). This is a valid concern as fully reprogrammed iPSCs can be difficult to differentiate into mature adult cells and instead these differentiated cells often resemble their fetal counterparts (Hrvatin et al., 2014). With our approach, cells temporarily lose their identity as they enter the MP but, importantly, reacquire their initial somatic fate when the reprogramming factors are withdrawn. This may be the result of persisting epigenetic memory at enhancers (Jadhav et al., 2019), which notably we find is not erased until the SP, as well as persistent expression of some fibroblast genes.
With our method employing longer periods of reprogramming, we observed robust and substantial rejuvenation of the whole transcriptome as well as aspects of the epigenome, with many features becoming approximately 30 years younger. This extent of rejuvenation appears to be substantially greater than what has been observed previously for transient reprogramming approaches that reprogram within the IP. The methylome appears to require longer reprogramming to substantially rejuvenate and consequently, previous work using shorter lengths of reprogramming resulted in modest amounts of rejuvenation of the methylome (Lu et al., 2020; Sarkar et al., 2020). However, we note that future studies are required to thoroughly compare these approaches with our method, ideally being performed in parallel on the same starting material and with the same reprogramming system, especially as different reprogramming systems can reprogram cells at different speeds (Schlaeger et al., 2015). Interestingly, these findings demonstrate that different parts of the epigenome undergo contrasting changes during transient reprogramming with age-associated CpG sites becoming differentially methylated during the MP and cell-identity regions remaining unchanged until the SP. The CpG sites within these two categories are distinct and the differential timing may suggest that different and potentially specific mechanisms are responsible for these changes. Telomere attrition is another aging hallmark, which can induce DNA damage and senescence (López-Otín et al., 2013). Consistent with previous studies (Marion et al., 2009), our reprogramming system did not induce telomere elongation until the SP, likely explaining why telomeres were not elongated by MPTR.
More recently, there have been in vivo transient reprogramming approaches that elicit similar magnitudes of rejuvenation to our in vitro MPTR method. In mice, 1 week of reprogramming induction followed by 2 weeks of recovery reversed age-associated expression changes (including collagen gene expression) and partially rejuvenated the DNA methylome in the pancreas (Chondronasiou et al., 2022). Interestingly, these outcomes closely mirror those observed in our human fibroblasts after MPTR. We note that iPSC reprogramming proceeds faster in mouse cells than in human cells (Teshigawara et al., 2017) and so this in vivo approach likely also reprograms up to the MP, supporting our findings that transient reprogramming up to the MP can substantially reverse multiple features of aging. In another recent approach, reprogramming was cyclically induced in mice for 2 days followed by 5 days of recovery for 7 months. This substantially reversed epigenetic clocks by up to 0.4 years (equivalent to 20 years in humans, similar to our system) (Browder, 2022). These results suggest that the rejuvenation from shorter periods of transient reprogramming is additive and when performed long term can reach the magnitude elicited by MPTR.
Quantifying the age of the transcriptome is challenging and our attempts to quantify transcriptional rejuvenation suggested varying magnitudes ranging from 20 to 40 years. In addition, we needed to apply batch correction to compare to reference aging data sets. There is a need in the field for a more robust transcription clock that predicts age accurately and can be applied to other data sets without batch correction. Such a tool would be invaluable and enable us to quantify more accurately the true extent of transcriptional rejuvenation arising from MPTR.
Upon further interrogation of the transcriptomic rejuvenation, we also observed changes in genes with non-fibroblast functions. In particular, the age-associated downregulation of APBA2 and the age-associated upregulation of MAF were reversed (Figure 3D). APBA2 stabilizes amyloid precursor protein, which plays a key role in the development of Alzheimer’s disease (Araki, 2003). MAF regulates the development of embryonic lens fibre cells, and defects in this gene lead to the development of cataracts, which are a frequent complication in older age (Ring et al., 2000). These observations may signal the potential of MPTR to promote more general rejuvenation signatures that could be relevant for other cell types such as neurons. It will be interesting to determine if MPTR-induced rejuvenation is possible in other cell types, which could help us understand and potentially treat age-related diseases such as Alzheimer’s disease and cataracts. Potentially we may be able to rejuvenate ex vivo clinically relevant cell types and administer these rejuvenated cells as an autologous cell therapy, for example, fibroblasts rejuvenated by MPTR may be applicable for treating skin wounds and improving wound healing. In addition, we may be able to use MPTR as a screening platform to find novel candidate genes that are responsible for reversing age-associated changes during reprogramming. Potentially by targeting such genes, we may be able to reverse age-associated changes without inducing pluripotency.
In our study, we investigated different lengths of reprogramming for our MPTR method and surprisingly found that longer lengths of reprogramming did not always promote more rejuvenation in the transcriptome and epigenome. Instead, we found that 13 days of reprogramming was the optimal period and that longer lengths of reprogramming diminished the extent of transcriptional and epigenetic rejuvenation. This finding contrasts with the observations of cells undergoing complete iPSC reprogramming and highlights the importance of assessing multiple reprogramming durations when using transient reprogramming approaches.
The Yamanaka factors possess oncogenic properties, which can lead to teratoma formation when persistently overexpressed in vivo (Abad et al., 2013; Ohnishi et al., 2014). Our approach should avoid these properties as we only temporarily express the factors, similar to other transient reprogramming approaches (Ocampo et al., 2016; Sarkar et al., 2020). Whilst we could not find any signatures of pluripotency within the transcriptomes or methylomes of transiently reprogrammed cells, we cannot discount the possibility that a minor subset of cells within the population maintain pluripotent-like characteristics, and could therefore induce teratoma formation if transplanted in vivo. We note though that this is a proof-of-concept study and that the method will eventually require modifications to be more suitable for therapeutic applications, such as by replacing the lentiviral vectors with non-integrating vectors.
The effect of starting age is a factor that remains to be explored. In our study, we examined the effects of MPTR on fibroblasts from middle-aged donors and observed an approximately 30-year rejuvenation. It will be interesting to perform our method on fibroblasts from younger and older donors to see if the rejuvenating effect of MPTR is constant. In that case, cells would always become 30 years younger than their controls. Alternatively, the effect of MPTR may scale with starting age, with more rejuvenation being observed in cells from older donors compared to cells from younger donors. Finally, we note that multiple cycles of transient reprogramming can be performed with some approaches (Ocampo et al., 2016). It will be interesting to examine if MPTR can be performed repeatedly on cells and if this may improve the extent of rejuvenation. However, this may not be possible with our current system as telomere length is unaffected by MPTR. In addition, multiple cycles may not improve the extent of rejuvenation as there may be a minimum age that can be achieved when limiting reprogramming to the MP.
Overall, our results demonstrate that substantial rejuvenation is possible without acquiring stable pluripotency and suggest the exciting concept that the rejuvenation program may be separable from the pluripotency program. Future studies are warranted to determine the extent to which these two programs can be separated and could lead to the discovery of novel targets that promote rejuvenation without the need for iPSC reprogramming.
Materials and methods
Plasmids and lentivirus production
The doxycycline-inducible polycistronic reprogramming vector was generated by cloning a GFP-IRES sequence downstream of the tetracycline response element in the backbone FUW-tetO-hOKMS (Addgene 51543, a gift from Cacchiarelli et al., 2015). This vector was used in combination with FUW-M2rtTA (Addgene 20342, a gift from Hockemeyer et al., 2008). Viral particles were generated by transfecting HEK293T cells with the packaging plasmids pMD2.G (Addgene 12259, a gift from Didier Trono) and psPAX2 (Addgene 12260, a gift from Didier Trono) and either FUW-tetO-GFP-hOKMS or FUW-M2rtTA.
iPSC reprogramming
Dermal fibroblasts from middle-aged donors (38–53 years old) were purchased from Lonza and Gibco and were used at passage 4 after purchase for reprogramming experiments. Cells were routinely tested for mycoplasma. For lentiviral iPSC reprogramming, fibroblasts were expanded in fibroblast medium (Dulbecco’s modified Eagle’s medium [DMEM]-F12, 10% fetal bovine serum [FBS], 1× Glutamax, 1× MEM-NEAA, 1× beta-mercaptoethanol, 0.2× penicillin/streptomycin, and 16 ng/ml FGF2) before being spinfected with tetO-GFP-hOKMS and M2rtTA lentiviruses, where 10% virus supernatant and 8 µg/ml polybrene was added to the cells before centrifugation at 1000 rpm for 60 min at 32°C. Reprogramming was initiated 24 hr after lentiviral transduction by introducing doxycycline (2 µg/ml) to the media. Media were then changed daily throughout the experiment. On day 2 of reprogramming, cells were flow sorted for viable GFP positive cells and then cultured on gelatine coated plates. On day 7 of reprogramming, cells were replated onto irradiated mouse embryonic fibroblasts (iMEFs) and on day 8 of reprogramming, the medium was switched to hES medium (DMEM-F12, 20% KSR, 1× Glutamax, 1× MEM-NEAA, 1× beta-mercaptoethanol, 0.2× penicillin/streptomycin, and 8 ng/ml FGF2). For transient reprogramming, cells were flow sorted at days 10, 13, 15, or 17 of reprogramming for the CD13+ SSEA4− and CD13− SSEA4+ populations. These cells were then replated on iMEFs (to replicate culture conditions before the flow sort and aid in cell reattachment) in fibroblast medium without doxycycline and then maintained like fibroblasts without iMEFs for subsequent passages. Cells were grown without doxycycline for 4 weeks in the first experiment and 5 weeks in the second experiment. Cells had returned to fibroblast morphology by 4 weeks in the second experiment but needed to be further expanded to generate enough material for downstream analyses.
Negative control cells underwent the same procedure as the transient reprogramming cells to account for the effects of growing cells on iMEFs in hES media, flow sorting cells and keeping cells in culture for extensive periods of time. These confounders appeared to have no major effects on fibroblasts as these cells still clustered with the starting fibroblasts in our principal component analyses (Figure 1—figure supplement 1D and E). For complete reprogramming, colonies were picked on day 30 of reprogramming and transferred onto Vitronectin coated plates in E8 medium without doxycycline. Colonies were maintained as previously described (Milagre et al., 2017) and harvested at day 51 of reprogramming to ensure that the SP was completed and that traces of donor memory were erased. For Sendai virus iPSC reprogramming using CytoTune-iPS 2.0 Sendai Reprogramming Kit (Invitrogen), fibroblasts were reprogrammed as previously described (Milagre et al., 2017). For intermediate time points, cells were flow sorted into reprogramming (CD13− SSEA4+) and not reprogramming populations (CD13+ SSEA4−) before downstream profiling.
Fluorescence-activated cell sorting of reprogramming intermediates
Cells were pre-treated with 10 µM Y-27632 (Stemcell Technologies) for 1 hr. Cells were harvested using StemPro Accutase cell dissociation reagent and incubated with antibodies against CD13 (PE, 301704, BioLegend), SSEA4 (AF647, 330408, BioLegend), and CD90.2 (APC-Cy7, 105328, BioLegend) for 30 min. Cells were washed two times with 2% FBS in phosphate-buffered saline (PBS) and passed through a 50-µm filter to achieve a single cell suspension. Cells were stained with 1 µg/ml DAPI just prior to sorting. Single color controls were used to perform compensation and gates were set based on the ‘negative control intermediate’ samples. Cells were sorted with a BD FACSAria Fusion flow cytometer (BD Biosciences) and collected for either further culture or DNA/RNA extraction.
DNA methylation array
Genomic DNA was extracted from cell samples with the DNeasy Blood and Tissue Kit (QIAGEN) by following the manufacturer’s instructions and including the optional RNase digestion step. For intermediate reprogramming stage samples, genomic DNA was extracted alongside the RNA with the AllPrep DNA/RNA Mini Kit (QIAGEN). Genomic DNA samples were processed further at the Barts and the London Genome Centre and run on Infinium MethylationEPIC arrays (Illumina).
RNA-seq
RNA was extracted from cell samples with the RNeasy Mini Kit (QIAGEN) by following the manufacturer’s instructions. For intermediate reprogramming stage samples and Sendai virus reprogrammed samples, RNA was extracted alongside the genomic DNA with the AllPrep DNA/RNA Mini Kit (QIAGEN). RNA samples were DNase treated (Thermo Fisher Scientific) to remove contaminating DNA. RNA-seq libraries were prepared at the Wellcome Sanger Institute and run on a HiSeq 2500 system (Illumina) for 50-bp single-end sequencing. For Sendai virus reprogrammed samples, libraries were prepared as previously described (Milagre et al., 2017), and run on a HiSeq 2500 (Illumina) for 75-bp paired-end sequencing.
RNA-seq analysis
Reads were trimmed with Trim Galore (version 0.6.2) and aligned to the human genome (GRCh38) with Hisat2 (version 2.1.0). Raw counts and log2 transformed counts were generated with Seqmonk (version 1.45.4). Reference data sets for fibroblasts and iPSCs were obtained from Fleischer et al., 2018 (GEO: GSE113957) and Banovich et al., 2018 (GEO: GSE107654). In addition, the reference data sets included novel data examining the intermediate stages of dermal fibroblasts being reprogrammed with the CytoTune-iPS 2.0 Sendai Reprogramming Kit (Invitrogen). Samples were carried forward for further analysis if they had a total read count of at least 500,000 with at least 70% of the reads mapping to genes and at least 65% of the reads mapping to exons.
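The sample-level QC thresholds above can be expressed as a simple filter. The function and per-sample metrics below are an illustrative sketch (names and example values are ours), not part of the published pipeline:

```python
def passes_qc(total_reads, frac_reads_in_genes, frac_reads_in_exons):
    """QC criteria described in the text: at least 500,000 reads,
    >=70% of reads mapping to genes, >=65% mapping to exons."""
    return (total_reads >= 500_000
            and frac_reads_in_genes >= 0.70
            and frac_reads_in_exons >= 0.65)

# Hypothetical per-sample metrics: (total reads, fraction in genes, fraction in exons)
samples = {
    "sample_A": (1_200_000, 0.82, 0.71),  # passes all three thresholds
    "sample_B": (450_000, 0.75, 0.70),    # fails on total read count
    "sample_C": (900_000, 0.68, 0.66),    # fails on gene-mapping fraction
}
kept = [name for name, metrics in samples.items() if passes_qc(*metrics)]
# kept == ["sample_A"]
```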
Immunofluorescence and imaging
Young control dermal fibroblasts were purchased from Lonza, Gibco, and the Coriell Institute (GM04505, GM04506, GM07525, GM07545, and AG09309) and were used at passage 4 after purchase. Antibody staining was performed as previously described (Santos et al., 2003) on cells grown on coverslips or cytospun onto coverslips after fixation with 2% paraformaldehyde for 30 min at room temperature. Briefly, cells were permeabilized with 0.5% TritonX-100 in PBS for 1 hr; blocked with 1% BSA in 0.05% Tween20 in PBS (BS) for 1 hr; incubated overnight at 4°C with the appropriate primary antibody diluted in BS; followed by washes in BS and incubation with secondary antibody. All secondary antibodies were Alexa Fluor conjugated (Molecular Probes) diluted 1:1000 in BS and incubated for 30 min. For the morphology analysis, cells were not permeabilized and were stained with direct labeled primary antibodies. Incubations were performed at room temperature, except where stated otherwise. DNA was counterstained with 5 μg/ml DAPI in PBS. Optical sections were captured with a Zeiss LSM780 microscope (63× oil-immersion objective). Fluorescence semi-quantification analysis was performed with Volocity 6.3 (Improvision). 3D rendering of z-stacks was used for semi-quantification of collagen I and IV. Single middle optical sections were used for semi-quantification of H3K9me3. Antibodies and dilutions used are listed below:
Wound healing assay
Cells were seeded into a wound healing assay dish (80466, Ibidi) at a cell density of 20,000 cells per chamber. GM04505, GM04506, GM07545, and AG09309 fibroblasts were used at passage 5 as young controls. After 24 hr, the insert was removed, generating 500 µm gaps between the cell-containing areas. The dishes were imaged every 20 min for 20 hr using a Nikon Ti-E equipped with a full enclosure incubation chamber (37°C; 5% CO2) and the 20× objective. The images were pre-processed by cropping and rotating so that the wound area was on the right-hand side of the image. A Fiji macro was used to generate masks of the wound healing images. The coverage of the wound by migrating cells was analyzed by measuring the intensity of the mask along a line across the image. R was used to determine the location of the wound edge by collecting all of the x coordinates where the mask intensity was high enough to indicate that it was no longer part of the wound. The wound edge at each time point was expressed relative to the starting position to obtain the distance closed. Migration speed was calculated from the gradient between distance closed and time.
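The final step, taking migration speed as the gradient of distance closed against time, can be sketched as an ordinary least-squares slope. The analysis in the paper was performed in R; the Python function and example numbers below are ours:

```python
def migration_speed(times_h, distances_um):
    """Slope of a least-squares fit of distance closed (um) against
    time (h), i.e. migration speed in um per hour."""
    n = len(times_h)
    mean_t = sum(times_h) / n
    mean_d = sum(distances_um) / n
    cov = sum((t - mean_t) * (d - mean_d)
              for t, d in zip(times_h, distances_um))
    var = sum((t - mean_t) ** 2 for t in times_h)
    return cov / var

# Hypothetical wound-edge distances sampled every 5 h over 20 h of imaging;
# the edge closes at roughly 10 um/h.
speed = migration_speed([0, 5, 10, 15, 20], [0, 52, 98, 151, 200])
```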
Data analyses
Downstream analyses of RNA-seq and DNA methylation data were performed using R (version 4.0.2). Ggplot2 (version 3.3.2) was used to generate the bar charts, boxplots, line plots, pie charts, scatter plots, and violin plots. ComplexHeatmap (version 2.4.3) was used to generate the heatmaps. The combat function from the package sva (version 3.36.0) was used in Figure 1E and F to batch correct the novel Sendai reprogramming data set to the other data sets. The combat function was also used in Figure 3 to batch correct the fibroblast aging reference data set (Fleischer et al., 2018) to our data set. Nonparametric tests were used when the data distribution was not normal and parametric tests were used when the data distribution was normal.
The random forest-based transcription clock was trained on the batch corrected aging reference data set using the caret R package (Kuhn, 2008) and random forest regression with tenfold cross validation, three repeats and a ‘tuneLength’ of 5. Chronological age was transformed before training with the following formulas adapted from the Horvath multi-tissue epigenetic clock (Horvath, 2013):
F(age)=log2(chronological.age+1)−log2(adult.age+1) if chronological.age≤adult.age
F(age)=(chronological.age−adult.age)/(adult.age+1) if chronological.age>adult.age
As with the Horvath multi-tissue epigenetic clock, adult.age was set to 20 years old for these calculations (Horvath, 2013). The BiT age clock was also retrained on the batch corrected aging reference data set using scikit-learn as previously described (Meyer and Schumacher, 2021). This retrained model had a median absolute error of 5.55 years and consisted of 29 genes.
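Written out in code, the transform and its inverse (the inverse is what maps clock predictions back to years; adult.age is 20 as stated, and the function names are my own) are:

```python
import math

ADULT_AGE = 20  # as in the Horvath multi-tissue clock

def transform_age(age, adult_age=ADULT_AGE):
    """Log-linear age transform used before training the transcription clock:
    logarithmic below adult.age, linear above it."""
    if age <= adult_age:
        return math.log2(age + 1) - math.log2(adult_age + 1)
    return (age - adult_age) / (adult_age + 1)

def inverse_transform_age(f, adult_age=ADULT_AGE):
    """Invert the transform to express a clock prediction in years."""
    if f <= 0:
        return (adult_age + 1) * 2 ** f - 1
    return f * (adult_age + 1) + adult_age
```

The two branches meet at age 20, where F(age) = 0, so the inverse is well defined across the whole range.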
Rejuvenated CpG sites were found by comparing the methylation difference due to the age (calculated with the Horvath et al., 2018 data set) to the methylation difference due to 13 days of transient reprogramming. CpG sites were classified as rejuvenated if they demonstrated a methylation difference of 10% over 40 years of aging that was reversed by transient reprogramming.
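Per CpG site, this classification reduces to two checks: the expected methylation change over 40 years of aging (per-year slope times 40) must reach 10 percentage points, and transient reprogramming must shift methylation in the opposite direction. A minimal Python sketch (names illustrative):

```python
def is_rejuvenated_cpg(aging_slope_per_year, reprog_delta, years=40, min_change=0.10):
    """A CpG is called rejuvenated when its expected methylation change over
    `years` of aging is at least `min_change` (10 percentage points on the
    beta-value scale) and transient reprogramming moves methylation in the
    opposite direction (opposite signs, hence a negative product)."""
    aging_delta = aging_slope_per_year * years
    return abs(aging_delta) >= min_change and aging_delta * reprog_delta < 0
```

A site gaining 0.4% methylation per year (0.16 over 40 years) that loses methylation after 13 days of transient reprogramming would therefore be classified as rejuvenated.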
Data availability
DNA methylation array and RNA-seq data are available on Gene Expression Omnibus under the accession number: GSE165180.
Decision letter
Our editorial process produces two outputs: i) public reviews designed to be posted alongside the preprint for the benefit of readers; ii) feedback on the manuscript for the authors, including requests for revisions, shown below. We also include an acceptance summary that explains what the editors found interesting or important about the work.
Decision letter after peer review:
Thank you for submitting your article "Multi-omic rejuvenation of human cells by maturation phase transient reprogramming" for consideration by eLife. Your article has been reviewed by 3 peer reviewers, and the evaluation has been overseen by a Reviewing Editor and Matt Kaeberlein as the Senior Editor. The reviewers have opted to remain anonymous.
The reviewers have discussed their reviews with one another, and the Reviewing Editor has drafted this to help you prepare a revised submission.
Essential revisions:
1. It is discussed that fibroblast morphology is reversed. It would be good to quantify these morphological dynamics. For instance, whether cell size undergoes a transition from mesenchymal to epithelial lineages and if any reversal is observed.
2. The data would strongly profit from one functional test, such as wound healing or any other simple assay, to analyze the function of the transiently reprogrammed fibroblasts in comparison to the negative control, for example.
3. Based on the standard protocols used for the culture of fibroblasts using culture medium containing fetal bovine serum (FBS), it is possible that the recovery of cellular identity following reprogramming is mainly due to differentiation signals coming from factors present in the medium. For these reasons, knockout serum (KSR) is used at later stages (day 8) of reprogramming to allow generation of iPSCs. The authors should rule out the possibility that recovery of fibroblast identity is due to the culture of reprogrammed fibroblasts in FBS-containing medium. For this purpose, the authors should test whether the fibroblast identity can be recovered following withdrawal by culturing the cells in KSR- or 1% FBS-containing medium instead of 10% FBS. This is a very important concept for the message that the manuscript tries to communicate regarding an epigenetic memory responsible for the recovery of fibroblast identity.
Reviewer #2 (Recommendations for the authors):
Concerns/Comments
I do not get the title (it might be the multi-omic…). Rejuvenation is achieved by Yamanaka, but not by multi-omics. So, the current title does not work.
There needs to be more information provided on the fibroblast. Passage numbers, expansions etc. That is only briefly mentioned in the MaM section. Passaging might influence aging….
The PC analysis of the samples is somewhat difficult to understand and actually not very informative nor fully convincing. What would happen simply without the reference data set in the PC analysis? This should be shown.
The intermediate cells fall indeed together with the reference cells doing almost exactly the same, 10-20 days of reprogramming. In that case, it might have been nice to have a fully reprogrammed set of these samples (expression of the factors for 40 days), and not simply a reference set.
Within the expression PC analysis, day 10-17 fall closely together, but not so in the CpG analysis. That might need to be commented on.
Is there a real difference in Figure 1d between the transient reprogramming intermediate and the failed to transiently reprogram intermediate? That might need to be the major focus of these analyses.
This reviewer does not appreciate picking out single, individual genes like in Figure 1e or g, as the overall global changes count, not single genes. Picking on single genes might be a bit misleading for the reader, especially as it is not clear whether these genes have a central function in the whole process.
It would further strengthen the manuscript if there were more information on the limitations of the transient procedure. At which length of reprogramming will we see additional negative effects on the overall procedure? Will they still return to being fibroblasts after transient reprogramming for longer periods, etc.? That is not really addressed.
The overall question that Figure 2 is trying to address is indeed interesting.
The type of analyses provided in Figure 2, though, remains rather superficial, so that in the end the question is what additional novel and informative data Figure 2 provides, other than that the transient intermediate is no longer a fibroblast and that, after stopping the Yamanaka factors, the cells return to being fibroblasts, which is already part of Figure 1. Again, out of a large number of genes, only 2 are picked that share a distinct pattern, but it is not listed how many other genes might share this pattern, and whether they might then also contribute to the phenotype, nor is there an attempt to validate the function of one or the other gene in the fibroblasts.
While the data in Figure 3 is really strong, there is concern about the conclusions of data from Figure 3c and Supplementary Figure 3c with respect to the optimum days of Yamanaka exposure for rejuvenation, as there is no analysis of differences among the transiently reprogrammed samples between day 10 and 17. That needs to be included to validate that statement in lines 350 and 360-361 of the manuscript (see also my comments above on limitations of the procedure).
The data would strongly profit from one functional test, such as wound healing or any other simple assay, to analyze the function of the transiently reprogrammed fibroblasts in comparison to the negative control, for example.
Data presented in Figure 4 is a bit redundant; for example, epi-clock data is already part of Figure 1 and the new epi-clock data might simply be included in Figure 1. For the H3K9me3 data, what are the statistics between failed to reprogram and reprogrammed? That is missing and interesting to know. The focus on the overlap of gene expression and epigenetics is highly interesting, and these analyses could easily be expanded on, or some more information and context provided, as these genes might now be indeed more important.
Discussion lines 544 to 551. I am not sure whether the data allow a direct comparison of the extent of rejuvenation to other approaches, as distinct analyses have been done in these publications, and direct functional comparisons have not been performed. While obviously there is a great level of rejuvenation within the approach the authors introduced, whether that is substantially greater than xy might require more detailed comparisons on multiple levels.
The translational aspects listed in lines 571 to 574 are somewhat vague and need to be either described in more detail or simply omitted.
Author response
1. It is discussed that fibroblast morphology is reversed. It would be good to quantify these morphological dynamics. For instance, whether cell size undergoes a transition from mesenchymal to epithelial lineages and if any reversal is observed.
This is an interesting point. To address this, we have quantified the morphological changes using confocal microscopy and measuring a ratio of roundness (the maximum length divided by the perpendicular width) of individual cells before, during and after maturation phase transient reprogramming (MPTR). Cells became temporarily rounder during MPTR (lower ratio) and then returned to an elongated state (higher ratio) which matched that of the starting fibroblasts (Figure 1D).
2. The data will strongly profit from one function test like wound healing or any other simple assay to analyze the function of the transiently reprogrammed fibroblast in comparison to the negative control for example.
We agree that a functional measure would be very informative and so we have performed an in vitro wound healing assay to measure the migration speed of transiently reprogrammed fibroblasts and compared them to negative control fibroblasts as well as young control fibroblasts (Figure 3G). Negative control fibroblasts from middle-aged donors moved more slowly than young control fibroblasts into the scratch wound and transient reprogramming partially restored migration speed, suggesting some functional rejuvenation.
3. Based on the standard protocols used for the culture of fibroblasts using culture medium containing fetal bovine serum (FBS), it is possible that the recovery of cellular identity following reprogramming is mainly due to differentiation signals coming from factors present in the medium. For these reasons, knockout serum (KSR) is used at later stages (day 8) of reprogramming to allow generation of iPSCs. The authors should rule out the possibility that recovery of fibroblast identity is due to the culture of reprogrammed fibroblasts in FBS-containing medium. For this purpose, the authors should test whether the fibroblast identity can be recovered following withdrawal by culturing the cells in KSR- or 1% FBS-containing medium instead of 10% FBS. This is a very important concept for the message that the manuscript tries to communicate regarding an epigenetic memory responsible for the recovery of fibroblast identity.
The effect of FBS in promoting the return to fibroblast identity is an interesting possibility. We planned to investigate this by growing cells in fibroblast medium containing 10% KSR instead of 10% FBS after withdrawal of doxycycline following 13 days of reprogramming, as suggested. However, we found that human fibroblasts could not be cultured long-term in KSR-containing media. In addition, human fibroblasts grew substantially slower in medium containing 1% FBS (the other condition suggested), which would prevent us from collecting sufficient material with our current protocol. Moreover, the substantially reduced growth speed would be a confounding factor that would limit the utility of any conclusions drawn. So unfortunately, whilst it was a very good suggestion, in practice it proved not possible to do these experiments.
I do not get the title (it might be the multi-omic…). Rejuvenation is achieved by Yamanaka, but not by multi-omics. So, the current title does not work.
For our title, we aimed to highlight that the rejuvenation is present in multiple-omic layers, and so we described this as multi-omic rejuvenation. This could be rephrased to “Maturation phase transient reprogramming promotes multi-omic rejuvenation in human cells”.
There needs to be more information provided on the fibroblast. Passage numbers, expansions etc. That is only briefly mentioned in the MaM section. Passaging might influence aging….
We tried to use the lowest passage number available to reduce the effect of in vitro culture on epigenetic age. Cells were used at passage four after purchasing and this has been added to the methods section (line 549). The exact passage number at purchase is unfortunately not available from Thermo Fisher.
The PC analysis of the samples is somewhat difficult to understand and actually not very informative nor fully convincing. What would happen simply without the reference data set in the PC analysis? This should be shown.
For the principal component analysis, the same trends are observed when the reference datasets are excluded with PC1 demonstrating the extent of reprogramming. As with the current figures, transiently reprogrammed fibroblasts cluster with the starting fibroblast samples and negative controls. We have added these additional PCA plots to Figure 1—figure supplement 1 (Figure 1—figure supplement 1D and 1E).
The intermediate cells fall indeed together with the reference cells doing almost exactly the same, 10-20 days of reprogramming. In that case, it might have been nice to have a fully reprogrammed set of these samples (expression of the factors for 40 days), and not simply a reference set.
We profiled fully reprogrammed cells which clustered with the reference iPSCs. We have added these samples along with the starting fibroblasts to Figures 1E and 1G.
Within the expression PC analysis, day 10-17 fall closely together, but not so in the CpG analysis. That might need to be commented on.
This is an interesting observation, which suggests that changes in the DNA methylome occur more gradually whereas changes in the transcriptome occur in more discrete stages. This has been commented on in the Results section (lines 176-180).
Is there a real difference in Figure 1d between the transient reprogramming intermediate and the failed to transiently reprogram intermediate? That might need to be the major focus of these analyses.
The transient reprogramming intermediate and failing to transiently reprogram intermediate samples cluster distinctly in figure 1E as well as in plots without the reference samples, supporting the idea that these populations are distinct. In addition, Nanog is only expressed in transient reprogramming intermediate cells and not the failing to transiently reprogram intermediate cells (Figure 1F).
This reviewer does not appreciate picking out single, individual genes like in Figure 1e or g, as the overall global changes count, not single genes. Picking on single genes might be a bit misleading for the reader, especially as it is not clear whether these genes have a central function in the whole process.
We have tried to represent RNA-seq data in such a way that global patterns are clear (PCA analyses, scatter plots etc.). We believe that representative examples of well-known genes such as Nanog, in addition to global analyses, improve clarity of the paper.
It would further strengthen the manuscript if there were more information on the limitations of the transient procedure. At which length of reprogramming will we see additional negative effects on the overall procedure? Will they still return to being fibroblasts after transient reprogramming for longer periods, etc.? That is not really addressed.
We investigated several timepoints within the maturation phase in our study to determine the optimal amount of reprogramming for maximum rejuvenation. With 17 days of MPTR, we already observe diminished rejuvenation according to transcription and epigenetic clocks. We hypothesise that after the maturation phase, transient reprogramming will be more difficult as the endogenous pluripotency factors are activated and so the reprogramming factors can no longer be reverted by withdrawing doxycycline. This point is now discussed in lines 118-120.
The overall question that Figure 2 is trying to address is indeed interesting.
The type of analyses provided in Figure 2, though, remains rather superficial, so that in the end the question is what additional novel and informative data Figure 2 provides, other than that the transient intermediate is no longer a fibroblast and that, after stopping the Yamanaka factors, the cells return to being fibroblasts, which is already part of Figure 1. Again, out of a large number of genes, only 2 are picked that share a distinct pattern, but it is not listed how many other genes might share this pattern, and whether they might then also contribute to the phenotype, nor is there an attempt to validate the function of one or the other gene in the fibroblasts.
Figure 2 highlights that some fibroblast genes remain expressed at high levels in transient reprogramming intermediate cells, whilst others are temporarily down-regulated but their enhancers remain lowly methylated. We have added the number of genes in each cluster to Figure 2—figure supplement 1B and provided the lists of gene names in supplementary file 3. The genes in figure 2E are examples from the clusters in Figure 2—figure supplement 1B.
While the data in Figure 3 is really strong, there is concern about the conclusions of data from Figure 3c and Supplementary Figure 3c with respect to the optimum days of Yamanaka exposure for rejuvenation, as there is no analysis of differences among the transiently reprogrammed samples between day 10 and 17. That needs to be included to validate that statement in lines 350 and 360-361 of the manuscript (see also my comments above on limitations of the procedure).
Transcriptional clocks suggest that transiently reprogrammed cells with 17 days of reprogramming are older than transiently reprogrammed cells with shorter lengths of reprogramming. This is also in line with the observations from epigenetic clocks (Figure 4B and Figure 4—figure supplement 1B).
The data would strongly profit from one functional test, such as wound healing or any other simple assay, to analyze the function of the transiently reprogrammed fibroblasts in comparison to the negative control, for example.
We agree that a functional measure would be very informative and so we have performed an in vitro wound healing assay to measure the migration speed of transiently reprogrammed fibroblasts and compared them to negative control fibroblasts as well as young control fibroblasts (Figure 3G). Negative control fibroblasts from middle-aged donors moved more slowly than young control fibroblasts into the scratch wound and transient reprogramming partially restored migration speed, suggesting some functional rejuvenation.
Data presented in Figure 4 is a bit redundant; for example, epi-clock data is already part of Figure 1 and the new epi-clock data might simply be included in Figure 1. For the H3K9me3 data, what are the statistics between failed to reprogram and reprogrammed? That is missing and interesting to know. The focus on the overlap of gene expression and epigenetics is highly interesting, and these analyses could easily be expanded on, or some more information and context provided, as these genes might now be indeed more important.
The aim of figure 4 is to demonstrate the rejuvenation at the epigenome level following the process of maturation phase transient reprogramming (MPTR), including by epigenetic clock analyses. The epigenetic clock analysis in figure 1 is different in that it does not include cells that have undergone MPTR, rather epigenetic clocks are used across a reprogramming time course to define the timepoint when rejuvenation occurs.
For figure 4A, we have added statistics comparing failed to transiently reprogram and transiently reprogrammed cells.
We agree that the overlap between epigenetic and transcriptional rejuvenation is interesting, and we have expanded upon the roles of these genes in the Results section (lines 422-430).
Discussion lines 544 to 551. I am not sure whether the data allow a direct comparison of the extent of rejuvenation to other approaches, as distinct analyses have been done in these publications, and direct functional comparisons have not been performed. While obviously there is a great level of rejuvenation within the approach the authors introduced, whether that is substantially greater than xy might require more detailed comparisons on multiple levels.
We appreciate that this is an important caveat. Indeed, we have re-analysed some public data generated using a different transient reprogramming method using our transcriptome clock (Figure 3—figure supplement 1b) and found that our method results in more substantial rejuvenation in comparison, but we agree that a more direct and thorough comparison using identical readouts will be necessary to confirm this. We have discussed these points in the Discussion section (lines 458-461).
The translational aspects listed in lines 571 to 574 are somewhat vague and need to be either described in more detail or simply omitted.
We have increased the amount of detail provided for potential translational aspects by discussing the potential of MPTR to rejuvenate cells for autologous cell transplants and provided a potential example (for treating skin wounds). We have also elaborated on the potential for our method to form the basis of a screen (lines 490-496).
Wellcome Trust (215912/Z/19/Z)
Milky Way Research Foundation
Wellcome Investigator award (210754/Z/18/Z)
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
The authors would like to thank all members of the Reik lab for helpful discussions. The authors would like to thank the bioinformatics facility at the Babraham Institute for processing the sequencing data, and the flow cytometry facility at the Babraham Institute for cell sorting. The authors would also like to thank the sequencing facilities at the Sanger Institute and the Bart’s and the London Genome Centre for sequencing and methylation array services, respectively. This work was funded by the BBSRC. AP is supported by a Sir Henry Wellcome Fellowship (215912/Z/19/Z). WR is a consultant and shareholder of Cambridge Epigenetix. TS is CEO and shareholder of Chronomics.
Introduction
Aging is the gradual decline in cell and tissue function over time that occurs in almost all organisms, and is associated with a variety of molecular hallmarks such as telomere attrition, genetic instability, epigenetic and transcriptional alterations, and an accumulation of misfolded proteins (López-Otín et al., 2013). This leads to perturbed nutrient sensing, mitochondrial dysfunction, and increased incidence of cellular senescence, which impacts overall cell function and intercellular communication, promotes exhaustion of stem cell pools, and causes tissue dysfunction (López-Otín et al., 2013). The progression of some aging related changes, such as transcriptomic and epigenetic ones, can be measured highly accurately and as such they can be used to construct “aging clocks” that predict chronological age with high precision in humans (Hannum et al., 2013; Horvath, 2013; Peters et al., 2015; Fleischer et al., 2018) and in other mammals (Stubbs et al., 2017; Thompson et al., 2017; Thompson et al., 2018). Since transcriptomic and epigenetic changes are reversible at least in principle, this raises the intriguing question of whether molecular attributes of aging can be reversed and cells phenotypically rejuvenated (Rando and Chang, 2012; Manukyan and Singh, 2012).
Induced pluripotent stem cell (iPSC) reprogramming is the process by which almost any somatic cell can be converted into an embryonic stem cell-like state. Intriguingly, iPSC reprogramming reverses many age-associated changes, including telomere attrition and oxidative stress (Lapasset et al., 2011). Notably, the epigenetic clock is reset back to approximately 0, suggesting reprogramming can reverse aging associated epigenetic alterations (Horvath, 2013). | yes |
Gastroenterology | Can stomach ulcers be caused by stress? | yes_statement | "stomach" "ulcers" can be "caused" by "stress".. "stress" can lead to the development of "stomach" "ulcers". | https://health.clevelandclinic.org/can-stress-give-you-an-ulcer/ | Can Stress Cause Stomach Ulcers? – Cleveland Clinic | Overuse of over-the-counter pain relief medication known as NSAIDs, short for nonsteroidal anti-inflammatory drugs.
Now, that doesn’t mean stress is off the hook completely. While it might not be the main culprit behind stomach ulcers, it definitely qualifies as an accomplice. Gastroenterologist Christine Lee, MD, explains.
Does stress cause ulcers?
Research shows that there’s a relationship between stress and ulcers. But does stress actually cause ulcers? That’s where things get complicated.
From numerous studies, it’s pretty clear that stress often serves as a backdrop to stomach ulcers, explains Dr. Lee. People diagnosed with this stomach condition often report high levels of stress in their daily lives.
But people under stress tend to use more NSAIDs to address aches and pains that develop. Stressors also can prompt more alcohol and tobacco use, factors known to fuel and worsen ulcer development, Dr. Lee notes.
Stress-stoking surgeries and illness have been connected to the development of stomach ulcers, too. (Plus, let’s be honest: The burning feeling in your gut that comes from an ulcer can amp up perceived stress levels!)
“Basically, it’s a chicken-or-the-egg sort of argument,” says Dr. Lee. “There’s a lot of conflicting research and debate on the topic. Most, though, view stress as something that does not cause stomach ulcers on its own.”
So, what causes ulcers?
A lining in your stomach protects it against the caustic acids and enzymes inside of your gut. Ulcers develop when that lining breaks down and allows those internal juices to eat away at your stomach wall.
But what’s powerful enough to undermine that tough lining? Let’s look at the two main sources.
H. pylori infection
Between 50% and 75% of the world’s population has H. pylori bacteria in their belly. For most, it’s not a problem. Sometimes, though, this bacteria multiplies to the point where your immune system can’t keep it in check.
This bacteria overgrowth may eventually work around your stomach’s immune system and damage your stomach walls, leading to ulcers. About 40% of stomach ulcers are linked to H. pylori.
NSAIDs
Taking an over-the-counter (OTC) pain pill is often shrugged off as no big deal in today’s world. But here’s the thing: The medications are powerful, and sending too many pills into your belly can cause problems.
The medication can irritate your stomach lining and even block your body’s natural ability to repair the damage. About 50% of stomach ulcers are caused by NSAID overuse.
The medication can decrease production of a hormone called prostaglandin, which can decrease the thickness of your stomach lining or impair your body’s natural ability to repair stomach lining damage.
Symptoms of a stomach ulcer
Burning discomfort and indigestion are two classic signs of a stomach ulcer. It can be described as an intense sensation that sometimes accompanies gut pain. The discomfort typically grows when you have an empty stomach.
Other common symptoms include:
A bloated stomach.
Nausea or vomiting.
Does a stomach ulcer go away?
Common ulcers typically heal with medication designed to reduce stomach acid and put a protective coating over the ulcer. If an H. pylori infection is involved, antibiotics may be prescribed to kill the bacteria.
You’ll need to avoid irritating the ulcer, too, which means avoiding NSAIDs, alcohol and smoking during recovery. Limiting their use afterward could help you avoid future issues, as well.
Managing stress
While stress may not cause a stomach ulcer, it certainly doesn’t help it, says Dr. Lee. Learning how to better handle stressors in your life can help you be a healthier, happier and more productive person.
Stress ulcer: Symptoms and treatments (Medical News Today)
What to know about stress ulcers
A stress ulcer causes sores in the upper gastrointestinal tract. These sores damage the gastrointestinal lining and can cause pain, a burning sensation and an increased risk of infection. The damage ranges from minor irritation to severe bleeding.
Ulcers are common among people under immense physical stress, such as those in intensive care units.
A stress ulcer is not the same as a peptic ulcer that is made worse by stress.
While both cause sores in the lining of the stomach and the intestines, a typical peptic ulcer — sometimes called a stomach ulcer — tends to emerge gradually, as drugs or infections weaken the gastrointestinal lining. Stress ulcers come on suddenly, usually as a result of physiological stress.
Rarely, very significant psychological stress can trigger a stress ulcer. For example, a 2018 case report details the treatment of an ulcer in a toddler. The ulcer appeared after she had refused to go to daycare for 1 month. The doctors speculate that stress probably caused the ulcer.
Certain health and lifestyle factors increase the risk of damage to the stomach and intestinal lining. These factors make it more likely that a person will develop an ulcer, including a stress-related ulcer:
H. pylori infection
use of nonsteroidal anti-inflammatory drugs, or NSAIDs, such as ibuprofen
In people facing serious injuries or health emergencies, a history of ulcers may also increase the risk of a stress ulcer.
Stress ulcers can cause pain that may improve or worsen when eating food.
Stress ulcers cause a continuum of symptoms.
Minor ulcers may cause no symptoms at all, while severe ulcers may cause intense pain and serious complications. Because people with stress ulcers are already sick, it can be difficult to distinguish ulcer symptoms from symptoms of another illness.
To diagnose an ulcer, a doctor needs to see the gastrointestinal tract. They may use an endoscope — a long, thin tube — to see the ulcer. Also, they may use blood, breath, or stool tests to check for H. pylori bacteria, which are a major risk factor for ulcers.
The right treatment depends on the severity of the ulcer and the symptoms it causes. Patients with serious bleeding may need a blood transfusion.
The primary goal of treatment is to reduce stomach acid and lower the risk of serious infections, bleeding, and shock.
Proton pump inhibitors (PPIs): This is a group of drugs that reduces stomach acid. People taking PPIs develop elevated gastrin levels, which can increase stomach acid if they stop taking the drug. It is therefore important to continue with treatment for as long as a doctor recommends.
Stress ulcers are very common in emergency and intensive care settings. Over 75 percent of people hospitalized with severe burns or head trauma develop stress ulcers within 72 hours of the injury. So, some hospitals give patients medications to prevent ulcers and routinely check for them.
Strategies for preventing stress ulcers are similar to those for treating the ulcers; PPIs and histamine blockers may reduce the risk of stress ulcers.
A Cleveland Clinic Journal of Medicine review cautions that there is no reason to give all hospitalized patients preventive treatment. Unnecessary preventive treatment increases costs and complications.
The American Society of Health-System Pharmacists recommends preventive treatment only for patients who meet specific risk criteria.
In the past, doctors told people with a history of ulcers to eat a bland diet. New research shows that this is not necessary. Spicy foods do not cause ulcers, though some people notice that their symptoms get worse after eating certain foods.
People at risk of developing stress ulcers often have serious health issues, such as infections, organ failure, or head injuries. A stress ulcer can cause serious inflammation and bleeding that complicates other conditions. This means that stress ulcers are more dangerous than traditional peptic ulcers.
Most people at risk of developing stress ulcers are already in the hospital. If a person has recently had a hospital stay and develops symptoms of an ulcer, they should contact a doctor right away.
Not all serious ulcers immediately cause serious symptoms, so it is important for a doctor to assess any ulcer symptoms that arise.
The outlook depends on several factors, including how severe the ulcer is and the patient’s overall health. When ulcers rapidly bleed, a person can experience life-threatening blood loss. This can make healing difficult.
With the right treatment, however, people can recover from both stress ulcers and the issues that cause them.
Myth or Fact, Stress Causes Ulcers... (Granite Peaks GI)
For years, people have believed that stress caused ulcers. While stress does contribute to a number of gastrointestinal issues (e.g., irritable bowel syndrome), it is not the cause of ulcers. There are two main causes of ulcers: (1) medications, primarily non-steroidal anti-inflammatory drugs (NSAIDs), which include both over-the-counter and prescription medications such as aspirin, ibuprofen, naproxen and others; and (2) a chronic bacterial infection known as H. pylori, which has been identified in 65-85 percent of those found to have stomach and duodenal ulcers. (Excessive alcohol use and smoking exacerbate and may promote the development of ulcers.)
Now that doctors know the two main causes of ulcers (NSAIDs and H. pylori infection), they are able to detect them, treat them and cure patients of their ulcer disease. Whereas in the past a patient might have had to undergo surgery for an ulcer, doctors can now adjust the medications or treat the H. pylori with antibiotics. Surgery is a rare option.
H. pylori is the most common infectious agent in the world and is especially prevalent in underdeveloped countries. Scientists are not sure how the H. pylori infection is spread, but suspect it is contracted through food and water.
“There are different strains of H. pylori,” explains Granite Peaks gastroenterologist Kyle Barnett, MD. “You may get the bacterial infection when you are young, but it might not cause symptoms for many years. If the strain is non-aggressive, you may never even know you have the infection.” When it does present itself, it is important to treat the infection, as it can lead to serious diseases. “When we see stomach cancer, this bacteria is often present,” confirms Dr. Barnett, who has been treating patients for more than 20 years.
Detecting the bacteria can be done through a variety of noninvasive tests. One of the easiest, quickest tests is the breath test, done during an office visit. A blood test identifies antibodies, signaling prior exposure to the bacteria; it doesn’t necessarily mean you are still infected. Like the blood test, a stool test can also show whether the bacteria is present.
Another method of detecting H. pylori is to do a biopsy. “Generally, we do a biopsy if we’re performing an upper endoscopy on a patient who has exhibited ulcer symptoms,” explains Dr. Barnett. There are factors that can influence the sensitivity of all the tests (e.g., whether the patient has been taking acid blockers or antibiotics).
“Providing your doctor with a detailed account of what you are taking and your symptoms will help determine what tests and steps should be taken next,” advises Dr. Barnett. He points out that it is common to see the bacteria in groups who have emigrated together or in families, since they have shared space, food and similar habits. This means that if your siblings or parents have tested positive for H. pylori, you could carry it too.
While abdominal pain is one of the symptoms of ulcers, it could also be a result of a number of gastrointestinal issues, such as acid reflux, pancreatitis or gallbladder issues. Testing for H. pylori will help determine whether an ulcer may be involved in the patient’s discomfort. Immediate evaluation is necessary when gastrointestinal bleeding is the presenting symptom, such as passing black or bloody stools. When blood mixes with acid in the stomach, it turns black.
The good news about ulcers? They are very treatable. “Twenty years ago, we knew very little about the role H. pylori played in the development of ulcers. Oftentimes, ulcers were a chronic problem in people; they would require surgery, sometimes removing a portion of their stomach as their ulcer treatment,” recalls Dr. Barnett. “Now, it is a rare patient that requires surgery. We can treat them medically.”
If a patient comes in with symptoms of burning abdominal pain, nausea or vomiting, has any symptoms that suggest a more aggressive process (e.g., bleeding, weight loss, trouble swallowing), or is elderly, they should be evaluated as soon as possible. Recognizing the symptoms and causes of ulcers can lead to earlier detection, specific non-surgical ulcer treatment and, hopefully, prevention of complications of ulcers.
Life Event, Stress and Illness (PMC)
Abstract
The relationship between stress and illness is complex. The susceptibility to stress varies from person to person. Among the factors that influence susceptibility to stress are genetic vulnerability, coping style, type of personality and social support. Not all stress has negative effects. Studies have shown that short-term stress boosts the immune system, but chronic stress has a significant effect on the immune system that ultimately manifests as illness. It raises catecholamine and suppressor T cell levels, which suppress the immune system. This suppression, in turn, raises the risk of viral infection. Stress also leads to the release of histamine, which can trigger severe bronchoconstriction in asthmatics. Stress increases the risk for diabetes mellitus, especially in overweight individuals, since psychological stress alters insulin needs. Stress also alters the acid concentration in the stomach, which can lead to peptic ulcers, stress ulcers or ulcerative colitis. Chronic stress can also lead to plaque buildup in the arteries (atherosclerosis), especially if combined with a high-fat diet and sedentary living. The correlation between stressful life events and psychiatric illness is stronger than the correlation with medical or physical illness. The relationship of stress with psychiatric illness is strongest in neuroses, followed by depression and schizophrenia. There is no scientific evidence of a direct cause-and-effect relationship between immune system changes and the development of cancer. However, recent studies found a link between stress, tumour development and suppression of natural killer (NK) cells, which are actively involved in preventing metastasis and destroying small metastases.
Introduction
Stress is defined as a process in which environmental demands strain an organism’s adaptive capacity, resulting in both psychological demands and biological changes that could place the individual at risk for illness (1). Things that cause us stress are called stressors. Stress affects everyone, young and old, rich and poor. Life is full of stress; it is an everyday fact of life that we must all deal with. It comes in all shapes and sizes; even our thoughts can cause us stress and make the human body more susceptible to illness. There are three theories or perspectives regarding stress: environmental stress, psychological (emotional) stress and biological stress (1). The environmental stress perspective emphasizes assessment of environmental situations or experiences that are objectively related to substantial adaptive demands. The psychological stress perspective emphasizes people’s subjective evaluations of their ability to cope with demands presented to them by certain situations and experiences. Finally, the biological stress perspective emphasizes the function of certain physiological systems in the body that are regulated by both psychologically and physically demanding conditions.
The relationship between stress and illness is complex. The susceptibility to stress varies from person to person. An event that causes illness in one person may not cause illness in another. Events must interact with a wide variety of background factors to manifest as an illness. Among the factors that influence susceptibility to stress are genetic vulnerability, coping style, type of personality and social support. When we are confronted with a problem, we assess its seriousness and determine whether or not we have the resources necessary to cope with it. If we believe that the problem is serious and that we lack the resources necessary to cope with it, we will perceive ourselves as being under stress (2). It is our way of reacting to situations that makes a difference in our susceptibility to illness and our overall well-being.
Not all stress has negative effects. When the body tolerates stress and uses it to overcome lethargy or enhance performance, the stress is positive, healthy and challenging. Hans Selye (3), one of the pioneers of the modern study of stress, termed this eustress. Stress is positive when it forces us to adapt, and thus to strengthen our adaptation mechanisms, or when it warns us that we are not coping well and that a lifestyle change is warranted if we are to maintain optimal health. This action-enhancing stress gives the athlete the competitive edge and the public speaker the enthusiasm to project optimally. Stress is negative when it exceeds our ability to cope, fatigues body systems and causes behavioral or physical problems. This harmful stress is called distress. Distress produces overreaction, confusion, poor concentration and performance anxiety, and usually results in subpar performance. Figure 1 illustrates this concept.
Figure 1: Eustress is the action-enhancing stress that gives athletes the competitive edge.
There is a growing concern about the increasing cost and prevalence of stress-related disorders, especially in relation to the workplace. “Worked to death,” “dropped dead,” “worked until they dropped”: such phrases highlight “work-related death” in the 21st century. Countries renowned for their long working hours know this well enough; Japan and China each have a word for death by overwork, karoshi and guolaosi respectively. Both Japan and Korea recognize suicide as an official and compensatable work-related condition (4). The estimated prevalence of stress and stress-related conditions in the United Kingdom rose from 829 cases per 100,000 workers in 1990 to 1,700 per 100,000 in 2001/2002. In that year, 13.4 million lost working days were attributed to stress, anxiety or depression, with an estimated 265,000 new cases of stress. The latest HSE (Health and Safety Executive) analysis of self-reported illness rates revealed that stress, depression or anxiety affects 1.3% of the workforce (5). It is estimated that 80% to 90% of all industrial accidents are related to personal problems and employees’ inability to handle stress (6). The European Agency for Safety and Health at Work reported that about 50% of job absenteeism is caused by stress (7).
The morbidity and mortality due to stress-related illness is alarming. Emotional stress is a major contributing factor to the six leading causes of death in the United States: cancer, coronary heart disease, accidental injuries, respiratory disorders, cirrhosis of the liver and suicide. According to statistics from the Meridian Stress Management Consultancy in the U.K., almost 180,000 people in the U.K. die each year from some form of stress-related illness (7). The Centers for Disease Control and Prevention of the United States estimates that stress accounts for about 75% of all doctor visits (7). This involves an extremely wide span of physical complaints including, but not limited to, headache, back pain, heart problems, upset stomach, stomach ulcers, sleep problems, tiredness and accidents. According to Occupational Health and Safety News and the National Council on Compensation Insurance, up to 90% of all visits to primary care physicians are for stress-related complaints.
Stress and the immune system
Our immune system is another area that is susceptible to stress. Much of what we know about the relationship between the brain, the nervous system and the immune response has come out of the field of psychoneuroimmunology (PNI). PNI was developed in 1964 by Dr. Robert Ader, the Director of the Division of Behavioral and Psychosocial Medicine at the University of Rochester. Psychoneuroimmunology is the study of the intricate interaction of consciousness (psycho), the brain and central nervous system (neuro), and the body’s defence against external infection and aberrant cell division (immunology) (8). Although PNI is a relatively new medical discipline, the philosophical roots of the connection between physical health, the brain and emotions can be traced to Aristotle.
Immune responses are regulated by antigens, antibodies, cytokines and hormones. Lymphocytes are most responsible for orchestrating the functions of the immune system. The immune system has about 1 trillion lymphocytes. Lymphocytes that grow and mature in the thymus are called T cells; other lymphocytes are called B cells. B cells secrete antibodies, chemicals that match specific invaders called antigens (humoral immunity). T cells do not secrete antibodies but act as messengers and killers, locating and destroying invading antigens (cellular immunity). Some T cells, called helpers, help activate the production of other T and B cells. Other T cells, called suppressors, stop the production of antibodies, calling off the attack. The number of T and B cells must be balanced for them to perform effectively. When the ratio of T to B cells is out of balance, the immune response is compromised and does not work effectively. Other key players produced by the immune system are macrophages, monocytes and granulocytes. These cells envelop, destroy and digest invading microorganisms and other antigens. Known generally as phagocytes, they team up with more than 20 types of proteins that make up the immune system’s complement system. This system is triggered by antibodies that lock onto antigens, causing inflammatory reactions.
Cytokines are non-antibody messenger molecules from a variety of cells of the immune system. Cytokines stimulate cellular release of specific compounds involved in the inflammatory response. They are made by many cell populations, but the predominant producers are helper T cells (Th) and macrophages. Th1 and Th2 cytokines inhibit one another’s production and function: Th1 cells stimulate cellular immunity and suppress humoral immunity, while Th2 cytokines have the opposite effect. Cytokine is a general name; more specific names include lymphokines (cytokines produced by lymphocytes), chemokines (cytokines with chemotactic activities), interleukins (IL) (cytokines made by one leukocyte and acting on other leukocytes) and interferons (IFN) (cytokines released by virus-invaded cells that prompt surrounding cells to produce enzymes that interfere with viral replication).
Cytokines are produced de novo in response to an immune stimulus. They generally act over short distances and short time spans, and at very low concentrations. They act by binding to specific membrane receptors, which then signal the cell via second messengers, often tyrosine kinases, to alter its behaviour (gene expression). Responses to cytokines include increasing or decreasing expression of membrane proteins (including cytokine receptors), proliferation and secretion of effector molecules. The largest group of cytokines stimulates immune cell proliferation and differentiation. Some common bacterial antigens activate complement and stimulate macrophages to express co-stimulatory molecules. Antigens stimulate adaptive immune responsiveness by activating lymphocytes, which in turn make antibodies to activate complement and cytokines to increase antigen elimination and recruit additional leukocytes.
Several studies have shown that chronic stress exerts a general immunosuppressive effect that suppresses or withholds the body’s ability to initiate a prompt, efficient immune reaction (9,10). This has been attributed to the abundance of corticosteroids produced during chronic stress, which produces an imbalance in corticosteroid levels and weakens immunocompetence. This weakening of immune function is thought to be associated with general strain on the various body parts associated with the production and maintenance of the immune system. For example, atrophy (shrinking) of the thymus results in its inability to produce T cells or the hormones needed to stimulate them. This can lead to an imbalance and inefficiency of the entire immune response. This is consistent with the finding that as we get older, we become more prone to infection, cancer, hypersensitivity and autoimmunity.
In a meta-analysis of 293 independent studies reported in peer-reviewed scientific journals between 1960 and 2001, with some 18,941 people taking part, it was confirmed that stress alters immunity (11). Short-term stress actually boosts the immune system as it readies itself to meet and overcome a challenge, an adaptive response preparing for injury or infection; but long-term or chronic stress causes too much wear and tear, and the system will break down, especially if the individual has little control over events. The analyses (11) revealed that the most chronic stressors (those that change people’s identities or social roles, are beyond their control and seem endless) were associated with the most global suppression of immunity; almost all measures of immune function dropped across the board. Duration of stress also plays a role. The longer the stress, the more the immune system shifted from potentially adaptive changes (such as those in the fight-or-flight response) to potentially detrimental changes, at first in cellular immunity and then in broader immune function. They also found that the immune systems of people who are older or already sick are more prone to stress-related change.
The link between stress and illness
The critical factor associated with stress is its chronic effect over time. Chronic stressors include daily hassles, frustration of traffic jams, work overload, financial difficulties, marital arguments or family problems. There are, of course, many more things that can cause stress, but these are the stressors commonly encountered in daily life. The pent-up anger we hold inside ourselves toward any of these situations, or the guilt and resentment we hold toward others and ourselves, all produce the same effects on the hypothalamus. Instead of discharging this stress, however, we hold it inside where its effects become cumulative.
Research shows that almost every system in the body can be influenced by chronic stress. When chronic stress goes unreleased, it suppresses the body’s immune system and ultimately manifests as illness. One can only wonder what would happen to the body if it remained in the fight-or-flight response. Fortunately, under normal circumstances, three minutes after a threatening situation is over and the real or imagined danger is removed, the fight-or-flight response subsides and the body relaxes and returns to its normal status. During this time heart rate, blood pressure, breathing, muscle tension, digestion, metabolism and the immune system all return to normal. If stress persists after the initial fight-or-flight reaction, the body’s reaction enters a second stage. During this stage, the activity of the sympathetic nervous system declines and adrenaline secretion is lessened, but corticosteroid secretion continues at above-normal levels. Finally, if stress continues and the body is unable to cope, there is likely to be a breakdown of bodily resources.
Medical illnesses
In asthma, both external and internal factors are involved; it is the internal factor that is most affected by the acute effects of psychological stressors. Family therapy is widely incorporated in the management of asthmatic children. The improvement is attributed to minimizing the interactions with parents that produced frequent stressful situations. Additionally, asthmatics exposed to a harmless substance to which they thought they were allergic could suffer a severe attack (12). A study by Gauci et al. (13) demonstrated significant positive correlations between several Minnesota Multiphasic Personality Inventory (MMPI) distress-related scales and skin reactivity in response to allergens. Collectively, these data provide evidence for a clear association between stress, immune dysfunction and the clinical activity of atopic and asthmatic disease. For further reference, Liu et al. (14) provided excellent evidence that stress can enhance the allergic inflammatory response.
Gastrointestinal diseases such as peptic ulcer (PU) and ulcerative colitis (UC) are known to be greatly influenced by stress. PU occurs twice as often in air traffic controllers as in civilian copilots, and occurs more frequently among air traffic controllers at high-stress centers (Chicago O’Hare, La Guardia, JFK and Los Angeles International Airport) than low-stress centers (airports in less-populated cities in Virginia, Ohio, Texas and Michigan). Although stress is a risk factor in PU, more than 20 other factors are thought to be associated as well: blood type, sex, HLA antigen type, alcoholic cirrhosis, hypertension, chronic obstructive pulmonary disease, cigarette smoking, and even consumption of coffee, carbonated beverage or milk during college (12). Certain stressful life events have been associated with the onset or symptom exacerbation in other common chronic disorders of the digestive system such as functional gastrointestinal disorders (FGD), inflammatory bowel disease (IBD) and gastro-esophageal reflux disease (GERD). Early life stress in the form of abuse also plays a major role in the susceptibility to develop FGD as well as IBD later in life (15).
Ulcers are caused by excessive stomach acid, and studies of patients with gastric fistulas have shown that anger and hostility increase stomach acidity, while depression and withdrawal decrease it. Another theory links the effect of stress on ulcer development to the mucous coating that lines the stomach: during chronic stress, noradrenaline secretion causes capillaries in the stomach lining to constrict. This, in turn, shuts down mucus production, and the protective mucous barrier for the stomach wall is lost. Without the protective barrier, hydrochloric acid breaks down the tissue and can even reach blood vessels, resulting in a bleeding ulcer (16). However, it has recently been discovered that many cases of ulcers are caused by a bacterium called Helicobacter pylori (H. pylori) (17). Although the exact mechanism by which it causes ulcers is unknown, it is believed that H. pylori inflames the gastrointestinal lining, stimulates acid production, or both.
Coronary heart disease (CHD) has long been regarded as a classical psychosomatic illness, in that its onset or course is influenced by a variety of psychosocial variables. Psychosocial aspects of CHD have been studied extensively, and there is strong evidence that psychological stress is a significant risk factor for CHD and CHD mortality (18,19,20,21). Tennant (19) found a positive relationship between life stress and cardiac infarction and sudden death, while a study by Rosengren et al. (20) reported that CHD mortality was increased two-fold for men experiencing three or more antecedent life events. The INTERHEART study (21) revealed that people with myocardial infarction reported a higher prevalence of four stress factors: stress at work and at home, financial stress, and major life events in the past year.
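The two-fold figure reported by Rosengren et al. is a relative risk: the event rate among exposed men divided by the rate among unexposed men. A minimal sketch of the calculation, using hypothetical counts rather than data from any cited study:

```python
# Relative risk from a 2x2 cohort table (exposed vs. unexposed, event vs. none).
# The counts below are hypothetical, chosen only to illustrate a two-fold risk;
# they are not taken from Rosengren et al. or any other cited study.

def relative_risk(exposed_events, exposed_total, unexposed_events, unexposed_total):
    """Ratio of the event risk in the exposed group to that in the unexposed group."""
    risk_exposed = exposed_events / exposed_total
    risk_unexposed = unexposed_events / unexposed_total
    return risk_exposed / risk_unexposed

# 40 deaths among 1,000 men reporting three or more life events,
# versus 20 deaths among 1,000 men reporting none:
rr = relative_risk(40, 1000, 20, 1000)
print(f"Relative risk: {rr:.1f}")  # prints "Relative risk: 2.0"
```

A relative risk of 2.0 means the exposed group's rate is double the unexposed group's, which is how "increased two-fold" is conventionally read in cohort studies.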
Although the evidence supporting an association between type A behaviour (aggressive, competitive, work-oriented and urgent behaviour) and CHD is conflicting (22), some studies found that type A individuals generate more stressful life events and are more likely than others to interpret an encountered life event in an emotionally adverse way (23,24). If type A is a risk factor, it may not operate by way of long-term physiological dysfunction (leading to atherogenesis), but by way of acute life events provoking severe strain on the heart. One component of type A behaviour is hostility, which may be correlated with CHD risk. Some studies (25,26) noted that clinical CHD events are predicted by hostility, and this seems to be independent of other risk factors. Hostility was also found to be related to atherosclerosis in some angiography studies (27,28). Other studies found that suppression of anger was associated with CHD events (29) and atherosclerosis (27,28). In reviewing these findings, Tennant (30) concluded that hostility (or its suppression) may have some role in CHD, although the mechanism is unclear.
The three major risk factors commonly agreed to be associated with CHD are hypercholesterolemia, hypertension and cigarette smoking. In an attempt to determine the causes of increased levels of serum cholesterol, Friedman et al. (31) conducted one of the early investigations of the relationship between stress and serum cholesterol and found that stress is one of the causes of elevated serum cholesterol. Other researchers, who studied medical students facing the stress of examinations (32) and military pilots at the beginning of their training and examination periods (33), verified the findings. Since blood pressure and serum cholesterol increase during stress, the relationship between stress and hypertension has long been suspected; emotional stress is generally regarded as a major factor in the etiology of hypertension (34). Some of the earliest evidence of this relationship came from the massive study of 1,600 hospital patients by Dunbar (35), who found that certain personality traits were characteristic of hypertensive patients: for example, they were easily upset by criticism or imperfection, possessed pent-up anger and lacked self-confidence. Recognizing this relationship, educational programs for hypertensive patients have included stress management.
It appears that some people are hereditarily susceptible to rheumatoid arthritis (RA). Approximately half of the sufferers of this condition have a blood protein called rheumatoid factor (RF), which is rare in non-arthritic people. Since RA involves the body turning on itself (an autoimmune response), it was hypothesized that a self-destructive personality may manifest itself through this disease (16). Although the evidence to support this hypothesis is not conclusive, several investigators have found personality differences between RA sufferers and others. Those affected with this disease have been found to be perfectionists and to be self-sacrificing, masochistic and self-conscious. Female patients were found to be nervous, moody and depressed, with a history of being rejected by their mothers and having strict fathers. It has been suggested that people with RF who experience chronic stress become susceptible to RA (16): their immunological system malfunctions, and the genetic predisposition to RA results in their developing the condition.
Migraine headaches are the result of constriction and dilatation of the carotid arteries of one side of the head. The constriction phase, called the prodrome, is often associated with light or noise sensitivity, irritability and a flushing or pallor of the skin. When the dilatation of the arteries occurs, certain chemicals stimulate adjacent nerve endings, causing pain. Diet may precipitate migraine headaches for some people. However, predominant thought on the cause of migraine pertains to emotional stress and tension. Feelings of anxiety, nervousness, anger or repressed rage are associated with migraine. An attack may be aborted when the individual gives vent to these underlying feelings (8). A typical migraine sufferer is a perfectionist: ambitious, rigid, orderly, excessively competitive and unable to delegate responsibility.
There is also evidence that emotionally stressful experiences are associated with endocrine disorders such as diabetes mellitus (36). Physical or psychological stressors can alter insulin needs, and stressors may often be responsible for episodes of loss of control, especially in diabetic children. Type II diabetes is the form most often affected by stress, as it tends to occur in overweight adults and is a less severe form of diabetes (12). Additionally, children who had stressful life events stemming from actual or threatened losses within the family, occurring between the ages of 5 and 9, had a significantly higher risk of type I diabetes.
Acute stress can suppress the virus-specific antibody and T cell responses to hepatitis B vaccine (37). People who show poor responses to vaccines have higher rates of illness, including influenza virus infection. Several other studies have demonstrated a relationship between psychological stress and susceptibility to several cold viruses (38,39). This is not surprising, as stress suppresses the immune system; latent viruses then have an easier time resurging because the body can no longer defend itself. Attempts to find an association between stress and disease progression in patients with acquired immunodeficiency syndrome (AIDS) have met with conflicting results (40). Analysis of the Multicentre AIDS Cohort Study failed to observe an association between depression and the decline of CD4+ T lymphocytes, disease progression or death (41), but others have found significant associations between immunological parameters reflective of HIV progression and psychosocial factors, particularly denial and distress (42), and concealment of homosexual identity (43).
Psychiatric illness
A large body of research over the past four decades has provided evidence that recent life events contribute to the onset of psychiatric illness (44). The association between stressful life events and psychiatric illness is stronger than the association with physical or medical illness. Vincent and Roscenstock (45) found that, prior to hospitalization, patients with psychiatric disorders had suffered more stressful events than those with physical disorders, whereas Andrew and Tennant (46) failed to find an association between stress and physical illness. Although the exact relationship between stress and psychiatric illness is not clear, the final pathway is biochemical. As with medical illness, the appropriate model is one of multifactorial causation. Most life event research applies a limit of 6 months when considering whether a stressor has a significant effect on illness; after that, the effect of stress diminishes with time.
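The 6-month limit mentioned above is typically operationalized by counting only events that fall inside a fixed window before illness onset. A minimal sketch of that windowing, with hypothetical events and an approximately 6-month (183-day) cutoff:

```python
from datetime import date, timedelta

# Keep only life events occurring within a fixed window before illness onset,
# mirroring the 6-month cutoff commonly applied in life-event research.
# All dates and event labels below are hypothetical.
def events_in_window(events, onset, window_days=183):
    """Return (date, label) pairs falling within `window_days` before `onset`."""
    start = onset - timedelta(days=window_days)
    return [(d, label) for d, label in events if start <= d < onset]

events = [
    (date(2022, 1, 10), "bereavement"),  # ~8 months before onset: excluded
    (date(2022, 6, 5), "job loss"),      # ~3 months before onset: counted
    (date(2022, 8, 20), "divorce"),      # ~2 weeks before onset: counted
]
kept = events_in_window(events, date(2022, 9, 3))
print([label for _, label in kept])  # prints "['job loss', 'divorce']"
```

Events outside the window are not treated as contributing to onset, which is one simple way to encode the diminishing effect of stress over time.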
Recent life events are held to have a major etiological role in neuroses, a formative role in the onset of neurotic depression (mixed depressive illness) and a precipitating role in schizophrenic episodes (47). In other words, the association of stress with psychiatric illness is strongest in neuroses, followed by depression, and weakest in schizophrenia. The contrast between neuroses and schizophrenia is the clearer one: a weak association between stressful life events and the onset of psychotic illness, particularly schizophrenia, has been demonstrated in a few studies (48,49,50), in contrast with the strong association between stress and neuroses (51,52,53,54). However, the degree of relationship between depressive illness and neuroses with respect to stress is rather controversial: neither Paykel (55) nor Brown et al. (56) found that the relationship between life event stress and illness is greater for neurotic depression than for unipolar (endogenous) depression.
Bebbington et al. (57) found an excess of life events preceding the onset of all types of psychoses, particularly in the first 3 months. In a study of recent-onset schizophrenia, schizophreniform disorder and hypomania, Chung et al. (49) found that threatening life events were significantly related to the onset of schizophreniform psychosis but not schizophrenia; they also found that threatening events might precipitate hypomanic episodes. Another study (50) found that individuals with schizophrenia do not experience more stressful life events than normal controls, but they report greater subjective stress. A study that investigated the relationship between recent life events and episodes of illness in schizophrenia found that initial or early episodes of schizophrenia are more likely to be associated with recent life events than are later episodes (48).
On the other hand, bipolar disorders have received less study than unipolar disorders. In bipolar disorder, the effect of life events is generally weaker than in unipolar disorder; however, major life events may be important in the first onset (58). Causative factors in bipolar disorders are multifactorial and complex, and genetic factors seem to influence exposure to life events: among those with greater genetic loading, there were fewer stressful life events before the first episode, and onset of the disease occurred earlier. A number of studies have shown that the onset of depression is often preceded by stressful life events (59,60). Stressful life events, along with recent minor difficulties, have also been identified as predictors of an episode of depression in a monozygotic female twin study. Kessler (61), who came to the same conclusion, added that there is evidence that concomitant chronic stress enhances the effect of major life events on depression.
Cooper and Sylph (51) documented the role of life events in the causation of neurotic illness, finding that the neurotic group reported 50% more stressful events than the control group. McKeon et al. (52) found that patients with obsessive-compulsive neuroses who have abnormal personality traits (obsessional, anxious and self-conscious) experienced significantly fewer life events than those without such traits. Zheng and Young (53), comparing life event stress between neurotic patients and normal controls, found that neurotic patients had significantly higher levels of stress and experienced more life event changes than the control group. Rajendran et al. (54), who compared neurotic executives with healthy executives as a control group, found significant differences between the normal and neurotic groups in the frequency of life events as well as the stress experienced due to those life events.
Stress and cancer
The relationship between breast cancer and stress has received particular attention. Some studies have indicated an increased incidence of early death, including cancer death, among people who have experienced the recent loss of a spouse or loved one. A few studies of women with breast cancer have shown a significantly higher rate of disease among women who experienced traumatic life events and losses within several years before their diagnosis. However, most cancers have been developing for many years and are diagnosed only after they have been growing in the body for a long time, a fact that argues against an association between the death of a loved one and the triggering of cancer. There is no scientific evidence of a direct cause-and-effect relationship between stress-induced immune system changes and the development of cancer; it has not been shown that such changes directly cause cancer. More research is needed to determine whether there is a relationship between psychological stress and the transformation of normal cells into cancerous cells. One area currently being studied is whether psychological interventions can reduce stress in cancer patients, improve immune function and possibly even prolong survival.
Studies in animals, mostly rats, reveal a link between stress and the progression of cancerous tumours. Chronic and acute stress, including surgery and social disruptions, appear to promote tumour growth. It is easy to do such research in animals, but harder with humans. Furthermore, the interactions of the many systems that affect cancer, from the immune system to the endocrine system, along with environmental factors that are impossible to control for, make sorting out the role of stress extremely difficult. In addition, researchers cannot expose people to tumour cells as they do with animals. A recent study (62) found a link between stress, tumour development and a type of white blood cell called natural killer (NK) cells. Of all the immune system’s cells, NK cells have shown the strongest links to fighting certain forms of the disease, specifically preventing metastasis and destroying small metastases. Although the result of this study is not definitive, it indicates that stress acts by suppressing NK-cell activity. Another preliminary study showed evidence of a weakened immune system in breast cancer patients who feel high levels of stress compared with those experiencing less stress.
A new study shows that stress and social support are important influences on a man’s risk of developing prostate cancer. Researchers (63) at the State University of New York at Stony Brook’s medical school found that men with high levels of stress and a lack of satisfying relationships with friends and family had higher levels of prostate-specific antigen (PSA) in their blood, a marker for an increased risk of developing prostate cancer. Based on the results, the risk of having an abnormal PSA was three times higher for men with high levels of stress. Likewise, men who felt they had low levels of support from friends and family were twice as likely to have an abnormal PSA. The findings raise the possibility that a man’s psychological state can have a direct impact on prostate disease.
American Psychological Association. (2023, March 8). Stress effects on the body. https://www.apa.org/topics/stress/body
Our bodies are well equipped to handle stress in small doses, but when that stress becomes long-term or chronic, it can have serious effects on the body.
Musculoskeletal system
When the body is stressed, muscles tense up. Muscle tension is almost a reflex reaction to stress—the body’s way of guarding against injury and pain.
With sudden onset stress, the muscles tense up all at once, and then release their tension when the stress passes. Chronic stress causes the muscles in the body to be in a more or less constant state of guardedness. When muscles are taut and tense for long periods of time, this may trigger other reactions of the body and even promote stress-related disorders.
For example, both tension-type headache and migraine headache are associated with chronic muscle tension in the area of the shoulders, neck and head. Musculoskeletal pain in the low back and upper extremities has also been linked to stress, especially job stress.
Millions of individuals suffer from chronic painful conditions secondary to musculoskeletal disorders. Often, but not always, there may be an injury that sets off the chronic painful state. What determines whether or not an injured person goes on to suffer from chronic pain is how they respond to the injury. Individuals who are fearful of pain and re-injury, and who seek only a physical cause and cure for the injury, generally have a worse recovery than individuals who maintain a certain level of moderate, physician-supervised activity. Muscle tension, and eventually, muscle atrophy due to disuse of the body, all promote chronic, stress-related musculoskeletal conditions.
Relaxation techniques and other stress-relieving activities and therapies have been shown to effectively reduce muscle tension, decrease the incidence of certain stress-related disorders, such as headache, and increase a sense of well-being. For those who develop chronic pain conditions, stress-relieving activities have been shown to improve mood and daily function.
Respiratory system
The respiratory system supplies oxygen to cells and removes carbon dioxide waste from the body. Air comes in through the nose and goes through the larynx in the throat, down through the trachea, and into the lungs through the bronchi. The bronchioles then transfer oxygen to red blood cells for circulation.
Stress and strong emotions can present with respiratory symptoms, such as shortness of breath and rapid breathing, as the airway between the nose and the lungs constricts. For people without respiratory disease, this is generally not a problem as the body can manage the additional work to breathe comfortably, but psychological stressors can exacerbate breathing problems for people with pre-existing respiratory diseases such as asthma and chronic obstructive pulmonary disease (COPD; includes emphysema and chronic bronchitis).
Some studies show that an acute stress—such as the death of a loved one—can actually trigger asthma attacks. In addition, the rapid breathing—or hyperventilation—caused by stress can bring on a panic attack in someone prone to panic attacks.
Working with a psychologist to develop relaxation, breathing, and other cognitive behavioral strategies can help.
Cardiovascular system
The heart and blood vessels comprise the two elements of the cardiovascular system that work together in providing nourishment and oxygen to the organs of the body. The activity of these two elements is also coordinated in the body’s response to stress. Acute stress—stress that is momentary or short-term such as meeting deadlines, being stuck in traffic or suddenly slamming on the brakes to avoid an accident—causes an increase in heart rate and stronger contractions of the heart muscle, with the stress hormones—adrenaline, noradrenaline, and cortisol—acting as messengers for these effects.
In addition, the blood vessels that direct blood to the large muscles and the heart dilate, thereby increasing the amount of blood pumped to these parts of the body and elevating blood pressure. This is also known as the fight or flight response. Once the acute stress episode has passed, the body returns to its normal state.
Chronic stress, or a constant stress experienced over a prolonged period of time, can contribute to long-term problems for heart and blood vessels. The consistent and ongoing increase in heart rate, and the elevated levels of stress hormones and of blood pressure, can take a toll on the body. This long-term ongoing stress can increase the risk for hypertension, heart attack, or stroke.
Repeated acute stress and persistent chronic stress may also contribute to inflammation in the circulatory system, particularly in the coronary arteries, and this is one pathway that is thought to tie stress to heart attack. It also appears that how a person responds to stress can affect cholesterol levels.
The risk for heart disease associated with stress appears to differ for women, depending on whether the woman is premenopausal or postmenopausal. Levels of estrogen in premenopausal women appears to help blood vessels respond better during stress, thereby helping their bodies to better handle stress and protecting them against heart disease. Postmenopausal women lose this level of protection due to loss of estrogen, therefore putting them at greater risk for the effects of stress on heart disease.
Endocrine system
When someone perceives a situation to be challenging, threatening, or uncontrollable, the brain initiates a cascade of events involving the hypothalamic-pituitary-adrenal (HPA) axis, which is the primary driver of the endocrine stress response. This ultimately results in an increase in the production of steroid hormones called glucocorticoids, which include cortisol, often referred to as the “stress hormone”.
The HPA axis
During times of stress, the hypothalamus, a collection of nuclei that connects the brain and the endocrine system, signals the pituitary gland to produce a hormone, which in turn signals the adrenal glands, located above the kidneys, to increase the production of cortisol.
Cortisol increases the level of energy fuel available by mobilizing glucose and fatty acids from the liver. Cortisol is normally produced in varying levels throughout the day, typically increasing in concentration upon awakening and slowly declining throughout the day, providing a daily cycle of energy.
During a stressful event, an increase in cortisol can provide the energy required to deal with prolonged or extreme challenge.
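The daily cycle just described, a peak shortly after awakening followed by a slow decline, can be caricatured with a toy exponential-decay model. The peak value, rise time and decay rate below are illustrative assumptions, not clinical parameters:

```python
import math

# Toy model of the diurnal cortisol cycle: a rapid rise after awakening,
# then a slow exponential decline over the rest of the day.
# All numbers are illustrative, not clinical values.
def cortisol_level(hours_awake, peak=20.0, decay_per_hour=0.15):
    """Approximate cortisol (arbitrary units) as a function of hours since waking."""
    if hours_awake < 0.5:
        return peak * (hours_awake / 0.5)  # rapid rise toward the awakening peak
    return peak * math.exp(-decay_per_hour * (hours_awake - 0.5))

morning = cortisol_level(0.5)   # at the awakening peak
evening = cortisol_level(14.0)  # late in the day, far lower
print(f"{morning:.1f} vs {evening:.1f}")
```

The monotonic decline after the morning peak is the point of the sketch; an acute stressor would add a transient bump on top of this baseline curve.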
Stress and health
Glucocorticoids, including cortisol, are important for regulating the immune system and reducing inflammation. While this is valuable during stressful or threatening situations where injury might result in increased immune system activation, chronic stress can result in impaired communication between the immune system and the HPA axis.
This impaired communication has been linked to the future development of numerous physical and mental health conditions, including chronic fatigue, metabolic disorders (e.g., diabetes, obesity), depression, and immune disorders.
Gastrointestinal system
The gut has hundreds of millions of neurons which can function fairly independently and are in constant communication with the brain—explaining the ability to feel “butterflies” in the stomach. Stress can affect this brain-gut communication, and may trigger pain, bloating, and other gut discomfort to be felt more easily. The gut is also inhabited by millions of bacteria which can influence its health and the brain’s health, which can impact the ability to think and affect emotions.
Stress is associated with changes in gut bacteria which in turn can influence mood. Thus, the gut’s nerves and bacteria strongly influence the brain and vice versa.
Early life stress can change the development of the nervous system as well as how the body reacts to stress. These changes can increase the risk for later gut diseases or dysfunctioning.
Esophagus
When stressed, individuals may eat much more or much less than usual. More or different foods, or an increase in the use of alcohol or tobacco, can result in heartburn or acid reflux. Stress or exhaustion can also increase the severity of regularly occurring heartburn pain. In rare cases, intense stress can set off spasms in the esophagus that can be easily mistaken for a heart attack.
Stress also may make swallowing foods difficult or increase the amount of air that is swallowed, which increases burping, gassiness, and bloating.
Stomach
Stress may make pain, bloating, nausea, and other stomach discomfort felt more easily. Vomiting may occur if the stress is severe enough. Furthermore, stress may cause an unnecessary increase or decrease in appetite. Unhealthy diets may in turn deteriorate one’s mood.
Contrary to popular belief, stress does not increase acid production in the stomach, nor does it cause stomach ulcers. The latter are actually caused by a bacterial infection. When a person is stressed, however, existing ulcers may be more bothersome.
Bowel
Stress can also make pain, bloating, or discomfort felt more easily in the bowels. It can affect how quickly food moves through the body, which can cause either diarrhea or constipation. Furthermore, stress can induce muscle spasms in the bowel, which can be painful.
Stress can affect digestion and what nutrients the intestines absorb. Gas production related to nutrient absorption may increase.
The intestines have a tight barrier to protect the body from (most) food related bacteria. Stress can make the intestinal barrier weaker and allow gut bacteria to enter the body. Although most of these bacteria are easily taken care of by the immune system and do not make us sick, the constant low need for inflammatory action can lead to chronic mild symptoms.
Stress especially affects people with chronic bowel disorders, such as inflammatory bowel disease or irritable bowel syndrome. This may be due to the gut nerves being more sensitive, changes in gut microbiota, changes in how quickly food moves through the gut, and/or changes in gut immune responses.
Nervous system
The nervous system has several divisions: the central division involving the brain and spinal cord and the peripheral division consisting of the autonomic and somatic nervous systems.
The autonomic nervous system has a direct role in physical response to stress and is divided into the sympathetic nervous system (SNS), and the parasympathetic nervous system (PNS). When the body is stressed, the SNS contributes to what is known as the “fight or flight” response. The body shifts its energy resources toward fighting off a life threat, or fleeing from an enemy.
The SNS signals the adrenal glands to release hormones called adrenalin (epinephrine) and cortisol. These hormones, together with direct actions of autonomic nerves, cause the heart to beat faster, respiration rate to increase, blood vessels in the arms and legs to dilate, digestive process to change and glucose levels (sugar energy) in the bloodstream to increase to deal with the emergency.
The SNS response is fairly sudden in order to prepare the body to respond to an emergency situation or acute stress—short term stressors. Once the crisis is over, the body usually returns to the pre-emergency, unstressed state. This recovery is facilitated by the PNS, which generally has opposing effects to the SNS. But PNS over-activity can also contribute to stress reactions, for example, by promoting bronchoconstriction (e.g., in asthma) or exaggerated vasodilation and compromised blood circulation.
Both the SNS and the PNS have powerful interactions with the immune system, which can also modulate stress reactions. The central nervous system is particularly important in triggering stress responses, as it regulates the autonomic nervous system and plays a central role in interpreting contexts as potentially threatening.
Chronic stress, experiencing stressors over a prolonged period of time, can result in a long-term drain on the body. As the autonomic nervous system continues to trigger physical reactions, it causes wear-and-tear on the body. It is not so much what chronic stress does to the nervous system, but what continuous activation of the nervous system does to other bodily systems, that becomes problematic.
Male reproductive system
The male reproductive system is influenced by the nervous system. The parasympathetic part of the nervous system causes relaxation, whereas the sympathetic part causes arousal. In the male anatomy, the autonomic nervous system produces testosterone and activates the sympathetic nervous system, which creates arousal.
Stress causes the body to release the hormone cortisol, which is produced by the adrenal glands. Cortisol is important to blood pressure regulation and the normal functioning of several body systems including cardiovascular, circulatory, and male reproduction. Excess amounts of cortisol can affect the normal biochemical functioning of the male reproductive system.
Sexual desire
Chronic stress, ongoing stress over an extended period of time, can affect testosterone production resulting in a decline in sex drive or libido, and can even cause erectile dysfunction or impotence.
Reproduction
Chronic stress can also negatively impact sperm production and maturation, causing difficulties in couples who are trying to conceive. Researchers have found that men who experienced two or more stressful life events in the past year had a lower percentage of sperm motility (ability to swim) and a lower percentage of sperm of normal morphology (size and shape), compared with men who did not experience any stressful life events.
Diseases of the reproductive system
When stress affects the immune system, the body can become vulnerable to infection. In the male anatomy, infections to the testes, prostate gland, and urethra, can affect normal male reproductive functioning.
Female reproductive system
Menstruation
Stress may affect menstruation among adolescent girls and women in several ways. For example, high levels of stress may be associated with absent or irregular menstrual cycles, more painful periods, and changes in the length of cycles.
Sexual desire
Women juggle personal, family, professional, financial, and a broad range of other demands across their life span. Stress, distraction, fatigue, etc., may reduce sexual desire—especially when women are simultaneously caring for young children or other ill family members, coping with chronic medical problems, feeling depressed, experiencing relationship difficulties or abuse, dealing with work problems, etc.
Pregnancy
Stress can have significant impact on a woman’s reproductive plans. Stress can negatively impact a woman’s ability to conceive, the health of her pregnancy, and her postpartum adjustment. Depression is the leading complication of pregnancy and postpartum adjustment.
Excess stress increases the likelihood of developing depression and anxiety during this time. Maternal stress can negatively impact fetal and ongoing childhood development and disrupt bonding with the baby in the weeks and months following delivery.
Premenstrual syndrome
Stress may make premenstrual symptoms worse or more difficult to cope with, and premenstrual symptoms may be stressful for many women. These symptoms include cramping, fluid retention and bloating, negative mood (feeling irritable and “blue”), and mood swings.
Menopause
As menopause approaches, hormone levels fluctuate rapidly. These changes are associated with anxiety, mood swings, and feelings of distress. Thus menopause can be a stressor in and of itself. Some of the physical changes associated with menopause, especially hot flashes, can be difficult to cope with.
Furthermore, emotional distress may cause the physical symptoms to be worse. For example, women who are more anxious may experience an increased number of hot flashes and/or more severe or intense hot flashes.
Diseases of the reproductive system
When stress is high, there is increased chance of exacerbation of symptoms of reproductive disease states, such as herpes simplex virus or polycystic ovarian syndrome. The diagnosis and treatment of reproductive cancers can cause significant stress, which warrants additional attention and support.
Stress management
These recent discoveries about the effects of stress on health shouldn’t leave you worrying. We now understand much more about effective strategies for reducing stress responses. Such beneficial strategies include:
Maintaining a healthy social support network
Engaging in regular physical exercise
Getting an adequate amount of sleep each night
These approaches have important benefits for physical and mental health, and form critical building blocks for a healthy lifestyle. If you would like additional support or if you are experiencing extreme or chronic stress, a licensed psychologist can help you identify the challenges and stressors that affect your daily life and find ways to help you best cope for improving your overall physical and mental well-being.
| Vomiting may occur if the stress is severe enough. Furthermore, stress may cause an unnecessary increase or decrease in appetite. Unhealthy diets may in turn deteriorate one’s mood.
Contrary to popular belief, stress does not increase acid production in the stomach, nor does it cause stomach ulcers. The latter are actually caused by a bacterial infection. When you are stressed, however, existing ulcers may be more bothersome.
Bowel
Stress can also make pain, bloating, or discomfort felt more easily in the bowels. It can affect how quickly food moves through the body, which can cause either diarrhea or constipation. Furthermore, stress can induce muscle spasms in the bowel, which can be painful.
Stress can affect digestion and what nutrients the intestines absorb. Gas production related to nutrient absorption may increase.
The intestines have a tight barrier to protect the body from (most) food-related bacteria. Stress can make the intestinal barrier weaker and allow gut bacteria to enter the body. Although most of these bacteria are easily taken care of by the immune system and do not make us sick, the constant low-level demand for inflammatory action can lead to chronic mild symptoms.
Stress especially affects people with chronic bowel disorders, such as inflammatory bowel disease or irritable bowel syndrome. This may be due to the gut nerves being more sensitive, changes in gut microbiota, changes in how quickly food moves through the gut, and/or changes in gut immune responses.
Nervous system
The nervous system has several divisions: the central division involving the brain and spinal cord and the peripheral division consisting of the autonomic and somatic nervous systems.
The autonomic nervous system has a direct role in physical response to stress and is divided into the sympathetic nervous system (SNS), and the parasympathetic nervous system (PNS). When the body is stressed, the SNS contributes to what is known as the “fight or flight” response. | no |
Gastroenterology | Can stomach ulcers be caused by stress? | yes_statement | "stomach" "ulcers" can be "caused" by "stress".. "stress" can lead to the development of "stomach" "ulcers". | https://health.usnews.com/conditions/articles/does-stress-cause-stomach-ulcers | Does Stress Cause Stomach Ulcers? | U.S. News | We know that too much stress can be bad for health. Chronic, high levels of stress can lead to a weakening of the immune system, which can open the door to other diseases. Constant stress also increases blood pressure, fatigue and mental health issues such as anxiety and depression. It’s even been linked to the development of heart disease.
Given how common stress is and that it can have such a negative effect on health overall, it’s no surprise that many people think that it can cause ulcers. But it’s a little more complicated than that.
What Are Ulcers?
“Peptic ulcers are defects or sores in the lining of the GI tract,” says Dr. Tara Menon, a gastroenterologist at The Ohio State University Wexner Medical Center in Columbus.
Ulcers can form in the stomach and small bowel, AKA the small intestine. “The symptoms of an ulcer may vary from person to person, and some individuals do not experience any symptoms at all,” Menon says. But among those who do have symptoms, common ones include:
What Causes Ulcers?
Dr. Robert Lerrigo, a gastroenterologist with Santa Clara Valley Medical Center in California, says there are “many different causes of stomach ulcers.” The most common among these are:
H. pylori infection. “Infection with the bacteria Helicobacter pylori can directly cause inflammation in the stomach and increase acid production,” Lerrigo says. Roughly 80% to 90% of stomach ulcers are caused by this bacterial infection.
NSAIDs. Frequent or excessive use of nonsteroidal, anti-inflammatory drugs such as aspirin, ibuprofen and naproxen can cause stomach ulcers because these over-the-counter medications “can impair the mucus lining of the stomach, leaving it susceptible to damage from stomach acid,” Lerrigo says.
Tumors and other diseases. Less common causes of ulcers include tumors that increase acid production in the stomach and stomach cancer, which can “erode into the stomach creating large ulcers,” Lerrigo says.
What About Stress?
You’ll notice that stress is not on the list of causes of stomach ulcers above. “Studies to date show that stress alone does not cause peptic ulcers,” Menon says. “However, we do know that if the body is under stress,” such as may occur when you’re severely ill, the body’s “ability to heal itself is impaired. As a result, one may be more prone to developing a peptic ulcer.”
Common stressors that may be associated with increased risk of developing or exacerbating stomach ulcers include smoking cigarettes or consuming excessive amounts of alcohol, Menon says. These habits “can impair the body’s ability to heal and ultimately cause peptic ulcers.” Being ill with another condition, such as an autoimmune disorder or some viral infections, can also set the stage for an ulcer to develop.
Even though stress is not a direct cause of ulcers, it’s still important to control it, as too much stress can have many negative impacts on overall health and well-being. Plus, Menon notes, “controlling stress may help to reduce some of the symptoms of ulcers, such as heartburn or reflux. Management of stress is good for our health overall.”
Managing Stress for Overall Health
Managing stress is an important element of modern life, and Menon recommends trying a variety of relaxation techniques and coping strategies to reduce stress.
Try meditation and mindfulness. Normal life is stressful enough, but particularly now during the coronavirus pandemic, meditation, mindfulness and other stress-busting practices are becoming more important than ever. These practices don’t have to be complicated or take a long time. Even just five minutes a day of mindful focus on your breathing can help reduce your stress level and possibly even lower your blood pressure.
Increase physical activity. “Routine exercise helps to release natural endorphins (hormones) that may reduce stress,” Menon says. You don’t have to do a ton – any modest increase can make a difference. “Always set reasonable goals for exercising.”
Get enough sleep. “Make sleep a priority. Getting a full night of sleep can be a very effective stress reducer,” Menon says.
Seek professional help. “It’s important to seek treatment from a professionally trained mental health provider when indicated,” Menon says. If you’re struggling with stress, reach out to your health care provider for advice and support.
Eliminate unhealthy habits. Smoking and consuming excessive amounts of alcohol add stress to the body. Though many people rely on these habits as a coping mechanism, they often make things worse. Do your best to kick or at least reduce these habits.
Improve your diet. As the saying goes, you are what you eat, and diet is a particularly important piece of the puzzle for those with peptic ulcers. Certain foods may trigger or exacerbate symptoms, so think about what you’re eating and how it might make you feel later. Menon recommends “avoiding foods that may trigger heartburn or reflux symptoms, such as tomato-based items, citrus-based items, spicy foods, fatty foods, caffeine or coffee. These items do not cause ulcers, but avoidance may alleviate some of the symptoms.”
Drop some weight. Losing weight isn’t easy, but even a small reduction in body weight can lead to an improvement of stomach ulcer symptoms. “Weight loss reduces pressure on the abdominal area, which in return can reduce symptoms of heartburn or reflux,” Menon says.
The U.S. News Health team delivers accurate information about health, nutrition and fitness, as well as in-depth medical condition guides. All of our stories rely on multiple, independent sources and experts in the field, such as medical doctors and licensed nutritionists. To learn more about how we keep our content accurate and trustworthy, read our editorial guidelines.
Robert Lerrigo, MD
Lerrigo is associate chief of gastroenterology and hepatology with Santa Clara Valley Medical Center in California.
Tara Menon, MD
Menon is a gastroenterologist at the Ohio State University Wexner Medical Center in Columbus.
Gastroenterology | Can stomach ulcers be caused by stress? | no_statement | "stomach" "ulcers" cannot be "caused" by "stress".. "stress" does not play a role in the formation of "stomach" "ulcers". | https://www.hopkinsmedicine.org/health/conditions-and-diseases/stomach-and-duodenal-ulcers-peptic-ulcers | Stomach and Duodenal Ulcers (Peptic Ulcers) | Johns Hopkins ... | Stomach and Duodenal Ulcers (Peptic Ulcers)
What is a peptic ulcer?
A peptic ulcer is a sore on the lining of your stomach or the first part of your small intestine (duodenum). If the ulcer is in your stomach, it is called a gastric ulcer. If the ulcer is in your duodenum, it is called a duodenal ulcer.
Ulcers are fairly common.
What causes peptic ulcers?
In the past, experts thought lifestyle factors such as stress and diet caused ulcers. Today we know that stomach acids and other digestive juices help create ulcers. These fluids burn the linings of your organs.
Causes of peptic ulcers include:
H. pylori bacteria (Helicobacter pylori). Most ulcers are caused by an infection from a bacteria or germ called H. pylori. This bacteria hurts the mucus that protects the lining of your stomach and the first part of your small intestine (the duodenum). Stomach acid then gets through to the lining.
NSAIDs (nonsteroidal anti-inflammatory drugs). These are over-the-counter pain and fever medicines such as aspirin, ibuprofen, and naproxen. Over time they can damage the mucus that protects the lining of your stomach.
What are the symptoms of peptic ulcers?
Each person’s symptoms may vary. In some cases ulcers don’t cause any symptoms.
The most common ulcer symptom is a dull or burning pain in your belly between your breastbone and your belly button (navel). This pain often occurs around meal times and may wake you up at night. It can last from a few minutes to a few hours.
Less common ulcer symptoms may include:
Feeling full after eating a small amount of food
Burping
Nausea
Vomiting
Not feeling hungry
Losing weight without trying
Bloody or black stool
Vomiting blood
Peptic ulcer symptoms may look like other health problems. Always see your healthcare provider to be sure.
How are peptic ulcers diagnosed?
Your healthcare provider will look at your past health and give you a physical exam. You may also have some tests.
Imaging tests used to diagnose ulcers include:
Upper GI (gastrointestinal) series or barium swallow. This test looks at the organs of the top part of your digestive system. It checks your food pipe (esophagus), stomach, and the first part of the small intestine (the duodenum). You will swallow a metallic fluid called barium. Barium coats the organs so that they can be seen on an X-ray.
Upper endoscopy or EGD (esophagogastroduodenoscopy). This test looks at the lining of your esophagus, stomach, and duodenum. It uses a thin lighted tube called an endoscope. The tube has a camera at one end. The tube is put into your mouth and throat. Then it goes into your esophagus, stomach, and duodenum. Your health care provider can see the inside of these organs. A small tissue sample (biopsy) can be taken. This can be checked for H. pylori.
You may also have the following lab tests to see if you have an H. pylori infection:
Blood tests. These check for infection-fighting cells (antibodies) that mean you have H. pylori.
Stool culture. A small sample of your stool is collected and sent to a lab. In 2 or 3 days, the test will show if you have H. pylori.
Urea breath test. This checks to see how much carbon dioxide is in your breath when you exhale. You will swallow a urea pill that has carbon molecules. If you have H. pylori, the urea will break down and become carbon dioxide. You will have a sample taken of your breath by breathing into a bag. It will be sent to a lab. If your sample shows higher than normal amounts of carbon dioxide, you have H. pylori.
How are peptic ulcers treated?
Treatment will depend on the type of ulcer you have. Your healthcare provider will create a care plan for you based on what is causing your ulcer.
Treatment can include making lifestyle changes, taking medicines, or in some cases having surgery.
Lifestyle changes may include:
Not eating certain foods. Avoid any foods that make your symptoms worse.
Quitting smoking. Smoking can keep your ulcer from healing. It is also linked to ulcers coming back after treatment.
Limiting alcohol and caffeine. They can make your symptoms worse.
Not using NSAIDs (non-steroidal anti-inflammatory medicines). These include aspirin and ibuprofen.
Medicines to treat ulcers may include:
Antibiotics. These bacteria-fighting medicines are used to kill the H. pylori bacteria. Often a mix of antibiotics and other medicines is used to cure the ulcer and get rid of the infection.
H2-blockers (histamine receptor blockers). These reduce the amount of acid your stomach makes by blocking the hormone histamine. Histamine helps to make acid.
Proton pump inhibitors or PPIs. These lower stomach acid levels and protect the lining of your stomach and duodenum.
Mucosal protective agents. These medicines protect the stomach's mucus lining from acid damage so that it can heal.
In most cases, medicines can heal ulcers quickly. Once the H. pylori bacteria is removed, most ulcers do not come back.
In rare cases, surgery may be needed if medicines don’t help. You may also need surgery if your ulcer causes other medical problems.
What are the complications of peptic ulcers?
Ulcers can cause serious problems if you don’t get treatment.
The most common problems include:
Bleeding. As an ulcer wears away the muscles of the stomach or duodenal wall, blood vessels may be hurt. This causes bleeding.
Hole (perforation). Sometimes an ulcer makes a hole in the wall of your stomach or duodenum. When this happens, bacteria and partly digested food can get in. This causes infection and redness or swelling (inflammation).
Narrowing and blockage (obstruction). Ulcers that are found where the duodenum joins the stomach can cause swelling and scarring. This can narrow or even block the opening to the duodenum. Food can’t leave your stomach and go into your small intestine. This causes vomiting. You can’t eat properly.
When should I call my healthcare provider?
See your healthcare provider right away if you have any of these symptoms:
Vomiting blood or dark material that looks like coffee grounds
Extreme weakness or dizziness
Blood in your stools (your stools may look black or like tar)
Nausea or vomiting that doesn’t get better, or gets worse
A sudden, severe pain that may spread to your back
Losing weight without even trying
Untreated peptic ulcers may cause other health problems. Sometimes they bleed. If they become too deep, they can break through your stomach.
Ulcers can also keep food from going through your stomach.
Key points
These ulcers are sores on the lining of your stomach or the first part of your small intestine (the duodenum).
Stomach acids and other digestive juices help to make ulcers by burning the linings of these organs.
Most ulcers are caused by infection from a bacteria or germ called H. pylori (Helicobacter pylori) or from using pain killers called NSAIDs.
The most common symptom is a dull or burning pain in the belly between the breastbone and the belly button.
Ulcers can be treated with a mix of lifestyle changes and medicines. In rare cases, surgery is needed.
Next steps
Tips to help you get the most from a visit to your healthcare provider:
Know the reason for your visit and what you want to happen.
Before your visit, write down questions you want answered.
Bring someone with you to help you ask questions and remember what your provider tells you.
At the visit, write down the name of a new diagnosis, and any new medicines, treatments, or tests. Also write down any new instructions your provider gives you.
Know why a new medicine or treatment is prescribed, and how it will help you. Also know what the side effects are.
Ask if your condition can be treated in other ways.
Know why a test or procedure is recommended and what the results could mean.
Know what to expect if you do not take the medicine or have the test or procedure.
If you have a follow-up appointment, write down the date, time, and purpose for that visit.
Psychobiology | Can sugar cause hyperactivity in children? | yes_statement | "sugar" can "cause" "hyperactivity" in "children".. "hyperactivity" in "children" can be "caused" by "sugar". | https://www.cnn.com/2019/04/18/health/sugar-hyper-myth-food-drayer/index.html | Does sugar make kids hyper? That's largely a myth | CNN | Does sugar make kids hyper? That’s largely a myth
We took the most popular food brands among Americans, in nine categories young kids love, and used the current US dietary guidelines to illustrate what the daily recommended amount of sugar for kids looks like. Our math: Each of these images represents 33 grams of sugar. The recommendation is that added sugar should equal less than 10% of one's daily caloric needs. The median calorie need for moderately active 4- to 8-year-olds is 1,500 calories. So we calculated 9% of 1,500 as 135 calories, which, at about 4 calories per gram of sugar, works out to 33 grams of sugar per day. If your child consumes what is pictured, they will probably have maxed out their recommended sugar intake for the whole day.
Forrest Aguar & Michelle Norris for CNN
For a standard 12-ounce can of Coca-Cola, about four-fifths of the can equals 33 grams of sugar.
For a standard 6-ounce container of Yoplait yogurt (strawberry), one plus four-fifths of another equals 33 grams of sugar.
For a 20-ounce bottle of Gatorade, there are 33 grams of sugar in about 97% of the bottle.
For an 8-ounce bottle of Nesquik low-fat chocolate milk, one and a half bottles equals 33 grams of sugar.
For a 6.75-ounce carton of Mott's apple juice, one plus another two-fifths of a carton equals 33 grams of sugar.
For a 0.9 oz bag of Welch's Mixed Fruit snacks, there are 33 grams of sugar in three bags.
For Honey Nut Cheerios, three plus two-thirds servings equals 33 grams of sugar. (One serving is three-quarters of a cup.)
For a standard 52.7-gram Snickers, one plus one-fifth of a bar equals 33 grams of sugar.
How much sugar is your kid eating?
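The arithmetic behind the gallery captions above is easy to check. A minimal sketch follows; the 39-gram figure for a 12-ounce Coca-Cola is our assumption based on a standard nutrition label, not a number stated in the captions:

```python
# Daily added-sugar budget used by the gallery captions above.
CAL_PER_GRAM_SUGAR = 4            # sugar provides about 4 calories per gram
daily_calories = 1500             # median need, moderately active 4- to 8-year-olds
budget_calories = 0.09 * daily_calories                   # 9% of daily calories
budget_grams = int(budget_calories / CAL_PER_GRAM_SUGAR)  # truncated, as in the captions

print(budget_calories)  # 135.0
print(budget_grams)     # 33

# Fraction of a 12-ounce Coca-Cola (assumed 39 g of sugar) that holds 33 g:
print(round(budget_grams / 39, 2))  # 0.85, i.e. about four-fifths of the can
```

The same per-serving division reproduces the other captions, e.g. 33 g at 9 g of sugar per serving of Honey Nut Cheerios gives roughly three and two-thirds servings.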
CNN
—
Does sugar make kids hyper? Maybe.
“If you look at the peer-reviewed evidence, we cannot say sugar absolutely makes kids hyper; however, you can’t discount that sugar may have a slight effect” on behavior, said Kristi L. King, senior pediatric dietitian at Texas Children’s Hospital and spokeswoman for the Academy of Nutrition and Dietetics.
In the mid-1990s, a meta-analysis reviewed 16 studies on sugar’s effects in children. The research, published in the medical journal JAMA, concluded that sugar does not affect behavior or cognitive performance in children. “However, a small effect of sugar or effects on subsets of children cannot be ruled out,” the article said.
Like adults, some children may be more sensitive to blood sugar spikes than others. This may mean they are more likely to become aroused when consuming sugar.
Notably, a small percentage of children with attention-deficit hyperactivity disorder may be extra sensitive to sugar, and their behavior changes when they eat it, according to Jill Castle, a registered dietitian and childhood nutrition expert who teaches a parenting course called the ADHD Diet for Kids. “They may become more aggressive or hyperactive or difficult to parent,” Castle explained. Minimizing sugar in the diet can be beneficial for these children.
According to Castle, lots of sugary foods can also equate to elevated amounts of food dyes, artificial flavors or other additives that could be problematic for a child with ADHD, often making it difficult to tease out whether sugar is the culprit.
Complicating the issue is the fact that we don’t have a way to determine whether there is a link. “Is there a biomarker? A hormone level?” King asked. “It’s disheartening for parents. … They want answers. And unfortunately, nutrition is such an individual thing.”
Sugar and hyperactivity: Positive link or parent perception?
The idea of a link between sugar and hyperactivity in children dates to the 1970s, when the Feingold diet, named for the pediatrician who devised it, was prescribed as an eating plan to alleviate symptoms of ADHD.
This diet may have led parents to perceive that sugar is a culprit when it comes to kids’ excitable behavior – even if it is not the true cause of one’s hyperactivity.
In one study from the mid-’90s, researchers gave children a drink containing a sugar substitute. One group of moms was told that their kids were drinking a high-sugar drink; the other group was told the truth, that their kids were consuming a sugar substitute. Mothers who were told that their kids consumed sugar rated their kids as more hyperactive, even though they didn’t consume any sugar.
“Just thinking their children were consuming sugar caused moms to perceive their children as being more hyperactive,” King said.
“When children consume sugar, it’s usually around something fun: holidays, birthdays, celebrations; there’s already that excitement there,” she said. “I don’t think you can say the sugar made them run around and play with friends. … That would be very hard to separate out.”
Instead, a release of the hormone adrenaline might explain a child’s overly energetic behavior. “It’s a fight or flight hormone; when you are excited or fearful, it increases heart rate and directs blood flow to the muscles, which may make children more antsy and have the urge to keep moving, so you may be perceiving that as hyperactivity,” King said.
To try to determine whether your child is truly sugar-sensitive or just excited about a celebration, Castle recommends eliminating sugary foods from the diet for a few weeks and then testing the child with a sugary food like soda, frosted cake or a tablespoon of sugar in 100% juice, and watching the child’s response. “It may be a quick way to determine how sugar may be affecting the child,” Castle said.
Then again, like the parents in that study, you may just think they’re being hyper just because you know that they consumed sugar.
Tips for parents
Even though most kids don’t have a sugar sensitivity, that doesn’t mean sugar is good for their health. Sugary foods and beverages deliver calories without any nutrients. What’s more, eating foods high in added sugars throughout childhood is linked to the development of risk factors for heart disease, such as an increased risk of obesity and elevated blood pressure in children and young adults.
To keep kids healthy, the American Heart Association recommends that children ages 2 to 18 consume less than 6 teaspoons – or 24 grams – of added sugars daily. To put that number in perspective, consider that 24 grams is the amount of sugar in just one 1.55-ounce chocolate bar. A 12-ounce can of regular soda contains about 40 grams of sugar, well over a day’s worth.
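To see how quickly individual items use up that allowance, here is a small illustrative sketch; the per-item gram values are taken from the article's own examples:

```python
AHA_DAILY_LIMIT_G = 24      # AHA limit for ages 2 to 18, roughly 6 teaspoons
GRAMS_PER_TEASPOON = 4      # about 4 grams of sugar per teaspoon

def share_of_limit(grams: float) -> float:
    """Fraction of a child's daily added-sugar allowance one item uses."""
    return grams / AHA_DAILY_LIMIT_G

print(AHA_DAILY_LIMIT_G / GRAMS_PER_TEASPOON)  # 6.0 teaspoons
print(share_of_limit(24))              # 1.0: one 1.55-ounce chocolate bar is a full day
print(round(share_of_limit(40), 2))    # 1.67: a 12-ounce regular soda is well over a day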
If you are looking for ways to cut back on sweets for your children, here are some tips to get started:
Gradually reduce the amount of sweets in your child’s diet. This is good advice for all kids, with and without ADHD. “I teach the 90/10 Rule for the appropriate balance of nourishing foods and sweets and treats, which equates to one to two normal-sized portions of sweets or treats each day, on average,” Castle said. If there seems to be a strong sensitivity to sweets, Castle recommends removing sweets and added sugar from the diet as best as you can.
Establish routine meals and snacks on a predictable schedule. “Anecdotally, this is one of the main things I work on with families, and they tell me they feel their child is calmer and better-behaved. There is something to be said for nourishing the brain and body on predictable, consistent intervals of three to four hours,” Castle said.
When introducing foods with added sugars, pair them with protein, healthy fat or fiber. This helps to blunt the effects of blood sugar surges and drops, and it optimizes satiety.
Castle and King suggest the following combinations:
Cookies with milk
Candy or chocolate with nut butter on crackers
Ice cream with nuts or oatmeal crumble topping
Cake with milk or milk alternative
Experts say you can also include your treat as part of a snack or meal. “If you’re at a party, try veggies and hummus and then having some dessert!” King said. “Or eat a small, sensible meal with lean protein, like turkey meat; add some cheese and baby carrots, and then add a fun treat or small sugar-sweetened beverage.”

The idea of a link between sugar and hyperactivity in children dates to the 1970s, when the Feingold diet, named for the pediatrician who developed it, was prescribed as an eating plan to alleviate symptoms of ADHD.
This diet may have led parents to perceive that sugar is a culprit when it comes to kids’ excitable behavior – even if it is not the true cause of one’s hyperactivity.
In one study from the mid-’90s, researchers gave children a drink containing a sugar substitute. One group of moms was told that their kids were drinking a high-sugar drink; the other group was told the truth, that their kids were consuming a sugar substitute. Mothers who were told that their kids consumed sugar rated their kids as more hyperactive, even though they didn’t consume any sugar.
“Just thinking their children were consuming sugar caused moms to perceive their children as being more hyperactive,” King said.
“When children consume sugar, it’s usually around something fun: holidays, birthdays, celebrations; there’s already that excitement there,” she said. “I don’t think you can say the sugar made them run around and play with friends. … That would be very hard to separate out.”
Instead, a release of the hormone adrenaline might explain a child’s overly energetic behavior. “It’s a fight-or-flight hormone; when you are excited or fearful, it increases heart rate and directs blood flow to the muscles, which may make children more antsy and have the urge to keep moving, so you may be perceiving that as hyperactivity,” King said.
To try to determine whether your child is truly sugar-sensitive or just excited about a celebration, Castle recommends eliminating sugary foods from the diet for a few weeks and then testing the child with a sugary food like soda, frosted cake or a tablespoon of sugar in 100% juice, and watching the child’s response.
Source: Does Sugar Really Cause “Bad” Behavior in Children? (Psychology Today, psychologytoday.com)
Does sugar really cause hyperactivity and challenging behavior?
Key points
Although diets high in sugar are linked to many health complications, research finds that eating sugar does not impact the behavior of children.
Some studies even find behavioral and academic benefits immediately after eating sugar.
The situations in which excess sugar may be consumed can make a child seem more hyperactive due to excitement or sensory over-stimulation.
Nearly every parent has had an experience in which their child eats more sugar than usual and seems to be bouncing off the walls or has an uncharacteristic tantrum or meltdown. We might laugh it off as a “sugar high” or even swear that they will never be allowed to eat that particular sugary food, or that quantity of it, again. This experience commonly happens at holidays, such as Halloween and Easter, when candy and sugary treats may be provided without restriction.
So does research back up this incredibly common experience? Does sugar really negatively impact children’s behavior?
Surprisingly, research consistently finds that eating sugar does not impact the behavior of children. A meta-analysis found that sugar did not seem to significantly impact children’s behavior, cognitive functioning, or academic performance. All studies included in this analysis compared children’s behavioral, cognitive, and academic performance after eating sugar versus a placebo.
So this meta-analysis suggests that sugar does not seem to impact children on average, but are there some children who are more sensitive to sugar and thus react negatively to it?
To address this question, one study compared school-age children who were reportedly more sensitive to sugar with preschool children who were not reported to be sensitive to sugar. The researchers then asked families to implement the following diets for three weeks each:
A diet high in sucrose (table sugar) with no artificial sweeteners.
A diet low in sugar but high in aspartame (an artificial sweetener which has also been suggested as a cause of hyperactivity in children).
A diet low in sugar but high in saccharin (an artificial sweetener which has not been linked to hyperactivity).
Parents were told to avoid any artificial coloring, additives, and preservatives in all diets. The researchers found no differences in behavior, attention, hyperactivity, mood, executive functioning, or academic performance in either typical preschool children or the sugar-sensitive children on any of the three diets. In fact, the researchers tested 39 different variables and found no difference among the diets on any of these variables.
Some studies even find behavioral and academic benefits immediately after eating sugar. One study found that children who drank a high-sugar beverage showed improved memory and classroom performance when compared to children who drank a sugar-free drink. Another study examined the impacts of sugar on the behavior of juvenile delinquents. The researchers found that adolescents who ate a high-sugar breakfast, particularly those with teacher-reported hyperactivity, showed improved behavior on some measures when compared to children who ate a sugar-free breakfast. Finally, research also found that children who ate a high-sugar snack showed improved memory compared to children who ate an artificially sweetened placebo. Researchers speculate that the brains of children may require more glucose to operate efficiently. Glucose, which the body produces by breaking down sugary foods, is the brain's primary source of energy, which may explain why behavior and academic performance improve after consuming sugar.
Critics of the studies described above may argue that these experiments do not represent how sugar is consumed in “real life” and that following children for three weeks is too short of a time period to see significant results. Another limitation of these experiments is that they compare the impact of sugar to a placebo which is most often an artificial sweetener such as aspartame or saccharin. They use artificial sweeteners because it is essential that the placebo taste sweet so that the research participants’ own expectations don’t impact the results. But it remains unclear the impact that these artificial sweeteners may have on behavior.
Addressing some of these concerns, another study examined links between the sugar consumption that children 8 to 12 years old reported from their daily lives and their behavior and sleep. The researchers found that 81% of the children in this study exceeded the recommended sugar intake (with the average child consuming the amount of sugar in 22 Oreo cookies per day!). Yet, sugar consumption was not correlated with any behavioral or sleep measures. It is important to note that this study is correlational, meaning that this cannot be interpreted as evidence that sugar does not cause behavioral and sleep problems.
How can this be true?
The research on sugar and behavior is limited but consistently shows that sugar is not linked to behavior in children. Still, you may be thinking of a specific instance of a “sugar high” that undoubtedly caused hyperactivity and challenging behavior and wondering how nearly every parent has experienced this phenomenon if sugar really has no impact on behavior. One reason could be parental expectation. Research finds that when children are given a placebo and their parents are told it is a high dose of sugar, parents report their children to be significantly more hyperactive. Social reinforcement may encourage these expectations. For example, when a parent says, “It seems like he is on a sugar high,” other adults around them are likely to back up this observation (“Of course; that happens to every child on Halloween”).
In addition, the situations in which children typically consume a lot of sugar (such as holidays and birthday parties) may make a child seem more hyperactive due to excitement or sensory over-stimulation. In other words, it may be the situation and not the sugar that causes the behavior.
Sugar is a complicated topic and you have to make the best decision for your child and family. Your child may have health issues that make it particularly important to avoid sugar or you may believe that the health risks are so serious that it makes sense to avoid sugar entirely. You may also strongly believe that your child responds negatively to sugar and that the research described above doesn’t necessarily prove you wrong. (Research typically only shows what is true for most children, not all children.) However, if you want to provide sugar for your child in moderation, the following tips may be helpful:
Use “covert control” of sugar rather than restriction to help your child learn to eat sugar in moderation. Research suggests that parents should avoid restricting all sugary foods from a child’s diet. Research finds that, when parents restrict sugary foods, their children might eat less in the short-term but become more preoccupied with the food over time. Another study found that when parents restricted food, children show excessive eating of these restricted foods when given access to them. So how do we avoid restricting intake of sugar without our child eating a package of Oreos between every meal? Instead of restriction, researchers recommend that parents use “covert control” to manage their child’s sweet intake. This can include not keeping a lot of sweets around the house, avoiding eating sweets yourself in front of your children, or avoiding places that sell sweets such as candy shops. Research shows that these more subtle approaches are effective at increasing healthy eating patterns.
Consider offering high-sugar foods with meals rather than as special “treats” to minimize their novelty and allure. By making foods like candy, desserts, and other treats more available as part of a meal, your child learns that they can be included in a healthy diet and should not be on a pedestal. Research finds that children actually eat less dessert when it is served with a meal than when it is served after a meal.
Change your own perspective. Your own expectations may have an impact on your child. Be careful about your own reaction to your child eating sugar and your own expectations. Rather than seeing all sugar as “evil,” view it as an important energy source that is essential for your child in moderation. Research finds that when mothers who believe their children are “sugar sensitive” are told their child was given sugar (yet they were actually given a placebo), the mothers showed more controlling behavior and criticism. Because controlling behavior and criticism are associated with more challenging behavior in children, it is possible that the parents’ own expectations cause the behavior rather than the sugar itself.
Try to find the cause of your child’s challenging behavior. Rather than simply blaming your child’s challenging behavior on sugar, it may be more helpful to try to find the cause of the behavior, such as attention-seeking, sensory over-stimulation, trying to escape demands, or a lack of skills such as not knowing how to ask for help.
Although diets high in sugar are linked to many health complications, there is currently no consistent evidence that diets high in sugar are linked to behavioral or academic problems. Instead of completely restricting sugar, parents may want to try using “covert control,” offering high-sugar foods with meals, changing their own expectations, and working to find the real cause of a child’s challenging behavior.
Source: Hyperactivity: is candy causal? (PubMed, pubmed.ncbi.nlm.nih.gov/8747098)
Abstract
Adverse behavioral responses to ingestion of any kind of candy have been reported repeatedly in the lay press. Parents and teachers alike attribute excessive motor activity and other disruptive behaviors to candy consumption. However, anecdotal observations of this kind need to be tested scientifically before conclusions can be drawn, and criteria for interpreting diet behavior studies must be rigorous. Ingredients in nonchocolate candy (sugar, artificial food colors), components in chocolate candy (sugar, artificial food colors in coatings, caffeine), and chocolate itself have been investigated for any adverse effects on behavior. Feingold theorized that food additives (artificial colors and flavors) and natural salicylates caused hyperactivity in children and elimination of these components would result in dramatic improvement in behavior. Numerous double-blind studies of the Feingold hypothesis have led to the rejection of the idea that this elimination diet has any benefit beyond the normal placebo effect. Although sugar is widely believed by the public to cause hyperactive behavior, this has not been scientifically substantiated. Twelve double-blind, placebo-controlled studies of sugar challenges failed to provide any evidence that sugar ingestion leads to untoward behavior in children with Attention-Deficit Hyperactivity Disorder or in normal children. Likewise, none of the studies testing candy or chocolate found any negative effect of these foods on behavior. For children with behavioral problems, diet-oriented treatment does not appear to be appropriate. Rather, clinicians treating these children recommend a multidisciplinary approach. The goal of diet treatment is to ensure a balanced diet with adequate energy and nutrients for optimal growth.
Source: 7 Foods to Avoid If Your Child Has ADHD (Everyday Health, everydayhealth.com)
For years, doctors have speculated that certain foods may have something to do with attention deficit hyperactivity disorder, or ADHD. Much research has been done on the subject of a helpful diet for ADHD, but according to the Mayo Clinic, experts don't believe that foods actually cause ADHD. What some foods seem to do, however, is worsen ADHD symptoms or cause behavior that mimics the signs of ADHD in children.
Some evidence suggests that children with ADHD may have low levels of essential fatty acids. However, early studies have not consistently concluded that supplementation of omega-3 fatty acids in the diets of children with ADHD will improve behavior. Omega-3 fatty acids affect the transmissions of some neurotransmitters (brain chemicals). While a balance of omega-3 fatty acids and omega-6 fatty acids is best for overall health, the typical American diet contains too few omega-3s. Some research shows that ADHD and omega-3 deficiency share two symptoms:
Excessive thirst
Increased need to urinate
More research is needed in this area. The general dietary recommendations for children are to include fruits and vegetables, whole grains, beans, lean meat, and fish. Ask your ADHD dietitian about the best type of fish for ADHD.
Many parents wonder if artificial food additives and colorings contribute to ADHD. Though the causes of ADHD are still unknown, you can try removing the sources of artificial colorings and food additives, including sugar-sweetened drinks, candy, and colorful cereals, and determine if your child’s behavior improves. Eliminate processed food products, and instead provide a wholesome diet of fresh, healthy foods to optimize the health and well-being of your child.
Be aware that megadoses of vitamins and minerals can be toxic to a child and can interact with ADHD pills. To date, there is little consistent evidence that ADHD can be treated with nutritional supplements. Again, aim for a balanced diet that includes a variety of fresh, whole foods.
What about caffeine and ADHD? Excessive caffeine and excessive consumption of fast foods and other foods of poor nutritional value can cause kids to display behavior that might be confused with ADHD, according to Frank Barnhill, MD, an expert on ADHD and the author of Mistaken for ADHD.
To learn more about a diet for ADHD, talk with your child's doctor about the pros and cons of trying a diet that eliminates food additives to see if it makes a difference in your child's behavior. Make sure your doctor or an ADHD dietitian helps supervise the diet plan. A diet that eliminates too many foods can be unhealthy because it may lack necessary vitamins and nutrients.
Read on for a list of foods that may be linked with ADHD symptoms.
Avoid Candy on a Diet for ADHD
Candy is loaded with sugar and artificial colors, a bad combination for children with ADHD. Both of these common ingredients have been shown to promote ADHD symptoms — namely hyperactivity — in studies. "With the high content of sugar and artificial coloring, candy is a huge contributor to ADHD," said Howard Peiper, a naturopath and the author of The ADD and ADHD Diet!
Sodas, Caffeine, and High-Fructose Corn Syrup Cause ADHD Symptoms
If you have ADHD, consider eliminating soda. (Even if you don't have ADHD, saying no to soda is a good idea.) These drinks often have many of the same sugars and sweeteners that make candy a bad idea for kids on the ADHD diet. And soda has other ingredients that worsen ADHD symptoms, such as high-fructose corn syrup and caffeine. "Excessive sugar and caffeine intake both cause symptoms of hyperactivity and easy distractibility," says Dr. Barnhill. One 2013 study found that 5-year-old children who drank sodas were more likely to show aggression and social withdrawal.
Frozen Fruits and Vegetables May Exacerbate ADHD Symptoms
Although fruits and vegetables are healthy choices for an ADHD diet, some frozen brands contain artificial colors, so check all labels carefully. Barnhill says some frozen foods can exacerbate ADHD symptoms for another reason: "Foods treated with organophosphates for insect control have been shown to cause neurologic-based behavioral problems that mimic ADHD and many other behavior problems."
Nix Cake Mixes and Frostings on a Diet for ADHD
Cake mix and frosting contain the high amounts of sugar and artificial colors that can lead to hyperactivity and other ADHD symptoms. Naheed Ali, MD, PhD, an expert on ADHD and the author of Diabetes and You: A Comprehensive, Holistic Approach, adds that these products are often also loaded with several artificial sweeteners. "When frosting and cake mix contain artificial sweeteners, they increase the risk of ADHD symptoms more than natural sweeteners would," he says.
Energy Drinks Can Worsen ADHD Symptoms in Teens
Energy drinks are becoming increasingly popular among kids, especially teens. Unfortunately, they also have a veritable treasure trove of ingredients that can worsen ADHD symptoms: sugar, artificial sweeteners, artificial colors, caffeine, and other stimulants. "Energy drinks are high on the list of things that cause teens to display behaviors mimicking ADHD," says Barnhill. They have no place in a healthy ADHD diet.
Ask an ADHD Dietitian About Eating Fish and Other Seafood
Dr. Ali says that eating fish and other seafood with trace amounts of mercury can exacerbate ADHD symptoms in the long term. Some of the worst culprits are shark, king mackerel, swordfish, and tilefish. "Mercury, like cellulose, is extremely hard to digest and can accumulate in the brain over time," explains Ali. "This can lead to hyperactivity." Talk to your doctor or ADHD nutritionist about the best types of fish to include in an ADHD diet.
ADHD Symptoms May Be Caused by Food Sensitivities
Many children with food sensitivities can exhibit ADHD symptoms after they are exposed to certain foods. Some of the common foods that can cause ADHD reactions include milk, chocolate, soy, wheat, eggs, beans, corn, tomatoes, grapes, and oranges. If you suspect a food sensitivity may be contributing to your child's ADHD symptoms, talk to your ADHD dietitian or doctor about trying an elimination diet.
Source: Does Sugar Cause Hyperactivity in Children? (Nourishing Hope, nourishinghope.com)
Do Diets High in Sugar Cause Hyperactivity in Children?
ADHD Diet for Children
Children with attention deficit hyperactivity disorder or ADHD face difficulties interacting with peers, performing well in school, and getting along with family members at home. Although more research into the multitude of causes of the disorder is needed, studies and anecdotal evidence suggest that there is a link between ADHD in children and diet. Read on to learn more about the role that sugar may play in ADHD symptoms.
What Is ADHD?
ADHD is a neurodevelopmental disorder that impacts 6.1 million children in the United States alone. Science has yet to uncover the exact cause of ADHD but believes that genetics, environmental factors, and dysfunction of the central nervous system may be involved. Kids with ADHD may exhibit many symptoms, including:
Forgetfulness
Squirming and fidgeting
Excessive talking
Difficulty taking turns
Trouble resisting temptation
Frequent daydreaming
Disorganization
In mainstream medicine, the condition is most often treated with stimulant medications and behavior therapy.
5 Ways Sugar Causes Hyperactivity and ADHD
For years, people have wondered if sugar could be to blame for ADHD. Scientific research hasn’t found sugar to be the sole cause of symptoms in a hyperactive child. In other words, children don’t get ADHD symptoms due to consuming sugar alone, and an ADHD elimination diet for kids that is free of sugar is unlikely to resolve the condition overnight. That said, there is some evidence to suggest that sugar may be detrimental for children who already have ADHD. There are several studies and theories on the subject, including:
1. High Glycemic Foods Cause Hyperactivity
Foods such as fructose, high-fructose corn syrup, fruit juice, donuts, bread, and instant oatmeal are high glycemic foods. These are foods that raise blood sugar rapidly. Researchers have found that high glycemic foods can cause hyperactivity in children, and low glycemic foods help to reduce symptoms of ADHD. [1]
2. Foraging Instinct From Fructose Contributes to ADHD
One study published in Evolution and Human Behavior found that a type of sugar called fructose can reduce energy levels in body cells. The researchers observed that this caused the cells to shift into starvation mode and that this could trigger instincts to forage for food to ensure the body’s survival. This hyperactive foraging response causes symptoms of impulsivity, aggression, recklessness, and cravings, contributing to ADHD (as well as aggression and bipolar disorder). [2]
3. Sugar and Low Dopamine Exacerbates Hyperactivity
Low dopamine activity in the brain is a common finding in ADHD. [3] And sugar is known to release dopamine in the brain. [4] Medical research into addiction has revealed that individuals with low dopamine levels may be more prone to addiction, in this case to sugar. Richard J. Johnson, MD, hypothesizes that repeated sugar intake sharply increases dopamine, which over time reduces dopamine receptors and baseline dopamine levels, exacerbating the low dopamine activity in ADHD and worsening the symptoms of hyperactivity. [5]
4. Sugar Contributes to Poor Impulse Control
Poor impulse control is a known symptom of ADHD. Researchers speculate that kids with the condition may be more likely to eat sugary foods in excess because they have trouble resisting the temptation to do so. In turn, the excess sugar may worsen the ADHD itself, creating a vicious cycle.
5. Low Blood Sugar Can Lead to Poor Concentration and Inattentiveness
Sugary foods cause blood sugar levels to rise rapidly and then plummet soon after digestion, leading to a sugar crash. Frequent sugar consumption can impair the body’s ability to regulate blood sugar levels and intensify sugar cravings, and as this cycle continues it can lead to low blood sugar, or hypoglycemia. Low blood sugar can cause poor concentration, inattentiveness, confusion, nervousness, and irritability. These symptoms can compound the difficulty with concentration and focus that someone with ADHD often has to begin with, and may exacerbate other ADHD behaviors.
Tips for an ADHD Diet
Consuming too much sugar has proven health consequences, which is reason enough to limit how much sugar kids with ADHD eat (even the better choices).
And now, research is showing what parents and clinicians have seen for years: sugar can cause or exacerbate hyperactivity and other symptoms of ADHD in children (and adults).
If you would like to reduce the amount of sugar in your child’s diet, follow these tips:
Limit sugar: Experts recommend keeping sugar to less than 25 grams per day. To achieve this, try limiting treats to 5 grams of sugar per serving. This is a great way to help kids read labels and learn how much sugar is in their favorite foods.
Focus on protein and fat. Including healthy amounts and types of protein and fat to the diet of children is important to ensure they have the building blocks they need for growth, energy, and cognitive function. Protein and fat make kids feel fuller for longer and can cut down on the urge to snack on sugary foods.
Start slow. Take small steps in changing your child’s diet. Suddenly cutting out all sugar from your child’s diet can be stressful for all of you. Instead, identify the largest sources of sugar from your kid’s diet and find healthier alternatives. Making one or two changes at a time can help children adapt to the change.
Eat whole foods. Serve fruit and make more treats with fruit such as a baked apple or peach cobbler.
Use better choices. Consider some of the better sugar options such as coconut sugar, or non-sugar sweetener options such as stevia or Lakanto. Try these in recipes in reduced amounts. Remember all sugar is sugar so limiting it is still important.
Try diet eliminations. An ADHD elimination diet can help you identify foods that could be worsening your child’s symptoms. With this approach, you remove the suspicious food from your child’s diet, keeping a log of what they eat and what symptoms they exhibit. After a few weeks have passed, analyze the data. If you don’t notice any change in your child’s symptoms, try eliminating another food. Once you have identified potential triggers, slowly reintroduce them to your child’s diet and see if symptoms return. This will help to establish a link between the food and your child’s behavior that you can share with their doctor.
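As a rough illustration of the logging and analysis step in the elimination-diet tip above, here is a minimal Python sketch. The log format, the food names, and the 0-10 symptom scores are hypothetical examples, not part of any clinical protocol; a real log would be reviewed with your child's doctor.

```python
# Hypothetical elimination-diet log: one entry per day with the foods
# eaten and a parent-recorded 0-10 symptom score.
from statistics import mean

def compare_symptoms(log, suspect_food):
    """Average symptom score on days with vs. without the suspect food."""
    with_food = [day["symptoms"] for day in log if suspect_food in day["foods"]]
    without_food = [day["symptoms"] for day in log if suspect_food not in day["foods"]]
    return (mean(with_food) if with_food else None,
            mean(without_food) if without_food else None)

log = [
    {"foods": {"soda", "toast"}, "symptoms": 7},
    {"foods": {"eggs", "fruit"}, "symptoms": 3},
    {"foods": {"soda", "cereal"}, "symptoms": 6},
    {"foods": {"oatmeal"}, "symptoms": 2},
]
print(compare_symptoms(log, "soda"))  # (6.5, 2.5)
```

A clearly higher average on "with" days only suggests a trigger worth discussing with a clinician; it does not prove causation.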
As we've discussed, sugar can be detrimental for attention deficit hyperactivity disorder. We also know that a low sugar diet is helpful for ADHD for many additional reasons, including maintaining normal blood sugar levels, which has many health benefits. If you want to learn more about how sugar can impact health and how to reduce it in your child's diet, here is another article to get you started.
The good news is hyperactivity and ADHD symptoms can improve, and the solutions you have in your own kitchen can help.
Julie Matthews is a Certified Nutrition Consultant who received her master’s degree in medical nutrition with distinction from Arizona State University. She is also a published nutrition researcher and has specialized in complex neurological conditions, particularly autism spectrum disorders and ADHD for over 20 years. Julie is the award winning author of Nourishing Hope for Autism, co-author of a study proving the efficacy of nutrition and dietary intervention for autism published in the peer-reviewed journal, Nutrients, and also the founder of BioIndividualNutrition.com. Download her free guide, 12 Nutrition Steps to Better Health, Learning, and Behavior.
5 Ways Sugar Causes Hyperactivity and ADHD
For years, people have wondered if sugar could be to blame for ADHD. Scientific research hasn’t found sugar to be the sole cause of symptoms in a hyperactive child. In other words, children don’t get ADHD symptoms due to consuming sugar alone, and an ADHD elimination diet for kids that is free of sugar is unlikely to resolve the condition overnight. That said, there is some evidence to suggest that sugar may be detrimental for children who already have ADHD. There are several studies and theories on the subject, including:
1. High Glycemic Foods Cause Hyperactivity
Fructose, high fructose corn syrup, fruit juice, donuts, bread, and instant oatmeal are high glycemic foods, meaning foods that raise blood sugar rapidly. Researchers have found that high glycemic foods can cause hyperactivity in children, and low glycemic foods help to reduce symptoms of ADHD. [1]
2. Foraging Instinct From Fructose Contributes to ADHD
One study published in Evolution and Human Behavior found that a type of sugar called fructose can reduce energy levels in body cells. The researchers observed that this caused the cells to shift into starvation mode, and that this could trigger instincts to forage for food to ensure the body's survival. This hyperactive foraging response causes symptoms of impulsivity, aggression, recklessness, and cravings, contributing to ADHD (as well as aggression and bipolar disorder). [2]
3. Sugar and Low Dopamine Exacerbates Hyperactivity
Low dopamine activity in the brain is a common finding in ADHD. [3] And sugar is known to release dopamine in the brain. [4] Medical research into addiction has revealed that individuals with low dopamine levels may be more prone to addiction, in this case, to sugar.
Sugar and hyperactivity (Angela Berrill)
While sugar provides us with a burst of energy, does it cause hyperactivity?
Sugar and hyperactivity
I know, I know, we’ve heard it said a hundred times that, “giving your child sugar will make them go crazy and have them bouncing off-the-walls”. In fact, we’ve probably heard this said so many times that we have even begun to believe this is what really happens when we ply our kids with cake, lollies and soft drinks. However, the reality is sugar does not cause children to become hyperactive. Crazy, right?! Well it’s actually not as crazy as it sounds, especially when we look to the scientific evidence on this much researched topic.
The origin of the claim that sugar causes hyperactivity can be linked back to the 1970s. The Feingold diet, which eliminates artificial flavourings, sweeteners (including sugar) and preservatives, was prescribed to help alleviate the symptoms of Attention-Deficit/Hyperactivity Disorder (ADHD). However, since then MANY trials and subsequent reviews have busted the long-held myth that sugar causes hyperactivity. A large review of the scientific evidence around this topic concluded that sugar does not affect behaviour or cognitive performance in children.
Why are children more hyperactive when they have sugar?
While sugar (and any highly processed or refined carbohydrate) will provide our bodies with a quick burst of energy, the link between sugar and hyperactivity has more to do with parents' perception than reality. Our deep-seated beliefs about the impact of sugar on behaviour often lead us to believe our children are being hyperactive when they've had sugar, even when it is not the case at all. In one study, children considered sensitive to sugar were given aspartame, a sugar substitute. Half of the mothers were told their children were given sugar and the other half, aspartame. The mothers who thought their child had had sugar rated them as more hyperactive than the controls, despite the children not having had any sugar at all!
Psychologists believe the reason kids bounce off-the-walls, is more likely to do with the environment, rather than the sugar itself. Sugary foods and drinks are often eaten at special events like birthday parties, Christmas, Halloween or local fairs. These events generally involve a whole lot of excitement where children see family and friends, play games and generally have fun, outside of a structured school setting or traditional home environment. And that’s ok - it’s just kids being kids and having fun; regardless of what they’ve had to eat or drink.
Sugar and hyperactivity is more correlation than causation. For example, on a wet and rainy day we tend to use umbrellas. However, this does not mean that using an umbrella caused it to rain. The same goes for sugar and hyperactivity. Kids might be more hyped up at events where sugar is served, but that does not mean that the sugar caused hyperactivity.
Sugar and Attention-Deficit/Hyperactivity Disorder (ADHD)
Whether sugar causes ADHD (or whether those with ADHD are more sensitive to the effects of sugar), has been the subject of many scientific studies - and, it’s a topic that continues to remain controversial. Attention-deficit/hyperactivity disorder (ADHD) is characterised by persistent symptoms of lack of attention, impulsivity and hyperactivity. Cutting refined sugar out of the diet of children with ADHD is often promoted as a way to manage symptoms of hyperactivity.
Early research indicated that those with ADHD may be more sensitive to the effects of sugar. A study in the 1980s found that higher sugar intakes were correlated with more disruptive and restless behaviour in children with ADHD. However, since then many well-designed trials have found the opposite. One study found no significant association between sugar intakes and ADHD development. These findings are further supported by another very recent study, which reported that ADHD is not caused by higher sugar intakes. The researchers concluded that higher sugar intakes were perhaps a consequence of ADHD, rather than a cause. While research indicates that sugar does not cause ADHD, it does remain possible that a small percentage of children with ADHD may be more sensitive to the effects of sugar than others. In these instances, it would be helpful to discuss your child's dietary needs with a Registered Dietitian who specialises in ADHD.
Sugar and health
Putting sugar and hyperactivity aside, there is no denying that we all need to cut back on the amount of 'free' sugar in our diet because of its many implications for our health. The World Health Organisation (WHO) 'Guideline: sugars intake for adults and children' (2015) reports that free sugars contribute extra energy to our diets, and higher intakes of free sugars have been linked to poorer diet quality, increased body weight, and non-communicable diseases (NCDs) such as tooth decay. Because of these health concerns, the WHO recommends that free sugars provide less than 10% of our total energy intake per day. The guidelines further suggest that a reduction to below 5% of total energy intake per day would have additional benefits.
10% of total energy intake is the equivalent of around 52 grams of sugar (~10 teaspoons*), and
5% equates to around 25 grams (~5 teaspoons*) of sugar per day, for an adult of normal Body Mass Index (BMI).
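The arithmetic behind these figures can be sketched in a few lines. The assumptions here (sugar supplying roughly 17 kJ per gram and a reference adult intake of about 8,700 kJ per day) are common rules of thumb rather than values stated in the article, and small rounding differences explain the gram counts quoted above.

```python
KJ_PER_GRAM_SUGAR = 17  # sugar supplies roughly 17 kJ (4 kcal) per gram

def free_sugar_limit(daily_energy_kj: float, percent_of_energy: float) -> float:
    """Grams of free sugar that would supply the given share of daily energy."""
    return daily_energy_kj * (percent_of_energy / 100) / KJ_PER_GRAM_SUGAR

# Reference adult intake of ~8,700 kJ/day:
print(round(free_sugar_limit(8700, 10)))  # 51, i.e. "around 52 grams"
print(round(free_sugar_limit(8700, 5)))   # 26, i.e. "around 25 grams"
```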
What is free sugar?
Free sugar refers to sugars added to food and drinks by the manufacturer, chef or you, at home. It also includes natural sugars found in the likes of honey, syrups, fruit juice and fruit juice concentrates. Unfortunately free sugar is found everywhere in foods these days. It’s not only used to impart flavour to the food we eat, but it is also used as a preservative and even a ‘filler’. Reading Food Labels, including the Nutrition Information Panel (NIP) and Ingredients List, can help you to determine whether there has been any sugar added to your food and drinks. When comparing foods within a food category (e.g. breakfast cereal), look for products that contain the least amount of sugar per 100g. Less is best!
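Comparing products on a per-100g basis, as suggested above, is just a ratio. Here is a small sketch; the product names and nutrition numbers are made up for illustration.

```python
# Hypothetical products from one category, with sugar grams per serving
# as read off each Nutrition Information Panel (NIP).
cereals = [
    {"name": "Cereal A", "sugar_g": 9.0, "serving_g": 40},
    {"name": "Cereal B", "sugar_g": 4.5, "serving_g": 45},
    {"name": "Cereal C", "sugar_g": 12.0, "serving_g": 50},
]

def sugar_per_100g(product):
    """Normalise per-serving sugar to a per-100g figure for fair comparison."""
    return 100 * product["sugar_g"] / product["serving_g"]

best = min(cereals, key=sugar_per_100g)
print(best["name"], round(sugar_per_100g(best), 1))  # Cereal B 10.0
```

Many NIPs already print a per-100g column, in which case you can compare those numbers directly.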
When it comes to the likes of coconut sugar, and other less-processed or less-refined types of sugar, they are all still sugar. The source of sugar or level of processing/refinement doesn't make the sugar any healthier. At the end of the day, when it comes to free sugar, sugar is sugar, regardless of its source.
Why do we need to cut back on sugary foods?
While we don’t need to avoid sugar altogether, the Ministry of Health recommends that we choose foods and drinks with little or no added (free) sugar. Sugary foods contain a whole lot of free sugar, and therefore energy (kJs or cal), without providing much in the way of health-promoting vitamins and minerals. Therefore, they are often referred to as ‘nutrient-poor’ foods. Filling up on sugary foods can also replace more nutritious foods and drinks in the diet.
Removing the guilt around sugar
We need to remove the guilt and shame that is so often associated with eating foods high in sugar. Sugary foods are not ‘bad’ and neither do you become ‘bad’ for eating them. It’s your whole diet that counts. We need to trust our children around sugar and allow them to develop the skills to connect to their bodies, and see how eating these foods makes them feel.
Ellyn Satter’s Division of Responsibility can be a helpful tool for helping you and your child to navigate sugary treats. As the parent it is our role to be responsible for what (the foods provided), when (the timing of meals and snacks) and where (ideally at the table, away from devices) of feeding your child. It is up to the child to determine how much and whether they eat what we provide. Include sugary foods along with their snacks, without restriction. Snacks higher in protein, healthy fats and/or fibre will help to blunt the effect of sugar spikes in the bloodstream, as well as helping to fill your child up and provide them with important nutrients for their health. For example, include a biscuit or cake alongside a glass of milk. Trust your child’s judgement and be guided by their hunger and fullness cues.
While it can be easy to get fixated on specific nutrients (such as sugar), it's important to remember that we don't just eat single nutrients or foods in isolation. We include a variety of foods (and nutrients) in our diet, across the day, week and even year. If we focus on the bigger picture, such as eating mostly whole foods and those that are as close to what is found in nature, then the nutrients will tend to take care of themselves, sugar included.
Does Eating Sugar Make A Child Hyperactive?
The idea that sugar makes children hyperactive is widespread and generally accepted. Most parents have heard that sugary foods will have kids "bouncing off the walls," but science says that sugar intake may not have the effect on kids' behavior we think it does. So, does sugar make a child hyperactive, or is there another explanation?
In this article, we'll discuss whether sugar consumption can cause hyperactivity in children, the truth about sugar, and whether some kids are indeed sensitive to sugar or not. Then, we'll discuss what the science says and the recommended sugar intake for kids.
Does Consuming Sugar Cause Hyperactivity?
First and foremost, sugar intake does not cause attention deficit hyperactivity disorder (ADHD). ADHD is a neurodevelopmental condition marked by brain differences. For someone to be diagnosed with ADHD, they must meet specific criteria outlined in the Diagnostic and Statistical Manual of Mental Disorders (DSM). Other risk factors, such as family history, have the most significant association with the development of ADHD.
Can sugar cause general hyperactivity in children, though? Surprisingly, the research shows that the answer is most likely "no," or at least, it does not appear to make kids hyper to the extent that we often assume.
The Truth About Sugar
In wellness spaces, we often hear that sugar, artificial sweeteners, and food additives make children hyperactive and cause behavior problems. Extensive research says otherwise.
A meta-analysis published in a medical journal called JAMA reviewed a total of sixteen studies on the effects of sugar on children and concluded that sugar doesn't affect children's behavior or cognitive performance. Even when intake exceeds standard dietary levels, studies show that both dietary sucrose (sugar) and aspartame (artificial sweetener) do not affect behavior or cognitive function in children.
Another study comparing the effects of three diets (one high in sugar with no artificial sweeteners, another low in sucrose that contained aspartame, and a third was low in sucrose that contained a placebo as a sweetener) on 39 behavioral and cognitive variables in kids found that there were no significant differences among the three diets.
Can sugar cause ADHD in children?
As stated above, ADHD and sugar are not directly related. Sugar does not cause ADHD. Causes of ADHD have been researched extensively and do not appear to include sugar intake.
A 2019 study on kids ages 6-11 concluded that no association exists between sugar consumption and the incidence of ADHD within the age group. Yet another study on fifth-grade children found the same - there was no connection between snacks high in simple sugar and ADHD development.
These findings further demystify the idea that sugar intake is connected to ADHD or hyperactivity. Why, then, might it seem that sugar causes hyperactivity in your kids?
Perception as a factor
Could the belief that sugar consumption leads to hyperactivity in children impact our perception of how it affects kids? Some research suggests that it could be a placebo effect.
In a small study on school-aged children 5-7 whose parents reported them as behaviorally "sugar sensitive," participants were randomly assigned to experimental and control groups. In one group, parents were told that kids had consumed a large dose of sugar; in the control group, parents were told they received a placebo. In reality, all of the kids had consumed a placebo. Still, the parents who believed their kids had eaten sugar rated their child's behavior as more hyperactive.
So, if you swear that your child is hyperactive after eating ice cream at a birthday party, it may not quite be the case. Remember that factors like environment and stimulation can impact children's behavior and mood, too. If hyperactivity is a consistent problem for a child and they haven't yet been evaluated for ADHD, it may be worth looking into.
There is also evidence to suggest that the times when children do consume more sugar are often fun, out-of-routine events that may be more likely to excite them or cause hyperactive behavior, such as Halloween or birthday parties.
Are Some Children Sensitive To Sugar?
Personal experience matters and can inform the way we feed our children. Whether you're a child or an adult, listening to your body matters; what works for one person might not work for another. Empirical evidence would suggest that some people, including kids, are more sensitive to sugar than others. At times, how individuals react to sugar can be impacted by medical conditions like PCOS or diabetes.
A registered dietitian may encourage several approaches for those more sugar sensitive than others. Approaches to help with sugar sensitivity can include but aren't limited to:
Pairing high-sugar items with protein and fat to balance blood sugar
Eating regularly to avoid drops in blood sugar
Limiting sugary snacks and other sugary foods
All in all, everyone is highly individual when it comes to how certain foods impact them. There is nothing wrong with limiting sugar intake, and in fact, that is highly recommended - even if the idea that sugar causes hyperactivity has been disproven.
Note: Does your child struggle with hyperactivity or finishing daily tasks like chores or homework? Try Joon. Joon is a new app and game designed for children with attention deficit hyperactivity disorder (ADHD) and their parents. As a to-do app that doubles as a game, Joon inspires motivation, independence, and self-esteem while helping kids complete important routine activities.
How does it work?
90% of children who use Joon finish all the tasks their parents assign. Joon is rated an average of 4.7 out of 5 stars in the app store, with a total of more than 3.9k reviews. Even better, it's backed by professionals such as teachers, occupational therapists, and child psychologists.
What The Science Says
While added sugars aren't to blame for making kids hyper, it doesn't mean that sugar intake is something to ignore entirely. Science tells us that sugar consumption impacts well-being in both kids and adults, and it is something to be mindful of.
Balanced blood sugar supports mood, behavior, and performance. Most people without interfering medical conditions can achieve this with a balanced diet of regular meals and snacks containing lean protein, healthy fats, carbohydrates, fruits, and vegetables. A balanced and nutritious diet can prevent negative health effects such as heart disease in the long run. Excess sugar in your diet has been linked to concerns such as cardiovascular disease and diabetes, so it is not something to take lightly even if it is not linked to hyperactivity.
However, physicians generally recommend that parents do not take it to the extreme. Eliminating food groups when it is not medically necessary, referring to foods as "bad," or avoiding sugar altogether can adversely affect kids. A balanced approach with "fun foods" here and there is ideal for the majority.
Additionally, addressing any relevant nutrient deficiencies and medical conditions in children matters and can impact their behavior and mood. For personalized guidance on your child's diet, speak with a medical provider such as a registered dietician.
Recommended sugar intake
To avoid negative short or long-term health effects, it is still ideal that children avoid diets high in sugar. What exactly counts as "high in sugar," though? Looking at the recommended sugar intake by age group can give parents an idea of what to go by. The American Heart Association recommends that a child's diet contain six teaspoons of added sugar or less daily. These recommendations are geared toward children ages 2-18. Kids younger than two years old should avoid added sugars altogether.
Added sugars refer to sugars added to processed foods and differ from the sugars naturally occurring in fruits and vegetables. Children should eat the daily recommended intake of fruits and vegetables for their age group unless otherwise directed, such as in the case of a medically documented fructose intolerance.
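The six-teaspoon guideline above can be expressed as a quick lookup. The conversion of roughly 4.2 grams of sugar per level teaspoon is a common rule of thumb assumed here, not a figure from the article.

```python
GRAMS_PER_TEASPOON = 4.2  # roughly 4.2 g of sugar per level teaspoon

def added_sugar_limit_g(age_years: float) -> float:
    """Daily added-sugar limit in grams for a child of the given age."""
    if age_years < 2:
        return 0.0  # under two: avoid added sugars altogether
    return 6 * GRAMS_PER_TEASPOON  # ages 2-18: six teaspoons or less

print(added_sugar_limit_g(1))             # 0.0
print(round(added_sugar_limit_g(10), 1))  # 25.2
```

Six teaspoons therefore works out to roughly 25 grams of added sugar per day.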
Takeaway
Sugar and hyperactivity are not as connected as many parents previously believed. In multiple studies on school-age children, researchers concluded that there's little-to-no connection between sugar and hyperactivity. Similarly, it's critical to understand that ADHD is a medical condition not caused by consuming sugar. This isn't to say that it doesn't matter what your child eats. Consuming a healthy, balanced diet is still critical for children. It's recommended that children ages 2-18 ingest six teaspoons of added sugar or less per day, whereas kids under age two should avoid added sugar altogether. Added sugar does not include naturally occurring sugar in fruits and vegetables. Personalized guidance from a medical provider is ideal for parents with questions or concerns about how diet and other factors affect their child.
Sugar & ADHD: Does Sugar Make ADHD Symptoms Worse?
Sugar is a well-known and prevalent ingredient that has found its way into many of the food and drinks we consume. Unfortunately, the prevalence of sugar can have some negative effects. Sugar is both toxic and addictive. The sugar in candy, soft drinks, and fruit juices can cause dysregulation in the brain. Specifically, sugar stimulates dopamine in the brain, as well as opioid receptors, which causes cravings for it.
For those with ADHD, sugar intake should be monitored closely since it can make ADHD symptoms worse. In fact, ADHD and sugar consumption has been studied to determine just how they interact. In this article, we'll look at just how sugar affects ADHD, if sugar causes ADHD, and how to approach ADHD sugar consumption.
Maintaining a healthy diet is crucial for getting essential nutrients to the brain to optimize functioning. It's especially important for those with brain-based conditions like ADHD and Autism Spectrum Disorders. To learn more about what foods to eat and avoid, have a look at the Drake Institute's recommended diet plan for kids with ADHD.
The Drake Institute uses non-invasive, drug-free treatment protocols for ADHD, Autism Spectrum Disorders, and other conditions like anxiety, depression, insomnia, and more. For over 40 years, the Institute has helped patients reduce and/or resolve their symptoms and improve their quality of life.
To learn more about the technologies we use and how they help kids, teens, and adults with ADHD, call us at 800-700-4233 or fill out the contact form.
What Is ADHD?
ADHD, or Attention-deficit/hyperactivity disorder, is a common condition that affects the brain's ability to concentrate on a non-preferred task. In about half of patients, it also makes it difficult to self-regulate their behavior. It is most often diagnosed in children, though teens and adults can also have it. It is a neurodevelopmental disorder characterized by difficulties focusing, paying attention, controlling impulsive behaviors, and more.
ADHD often negatively affects nearly every part of daily life, including interactions with peers, academics, work-life, and daily living activities. Without proper treatment, the challenges of living with ADHD can lead to negative outcomes and unfulfilled potential.
According to the CDC, it is estimated that 9.4% of American children have ever been diagnosed with ADHD, making it one of the most common brain-based disorders in the US.
Symptoms of ADHD
Though it's expected for children to have a hard time sitting still for a long period, many kids with ADHD will struggle more than their peers. They may lack organizational skills or behavioral control typical for their age, which can lead to problems at school and with friends. ADHD can also disrupt family relations and put more pressure on marriages.
ADHD may present differently in different patients. Some may experience symptoms more closely related to inattention, while others may suffer from hyperactivity and impulsivity more prominently. Still, others may experience a combination. Below are the most common symptoms for each type of ADHD.
ADHD: (Inattentive presentation)
Inattention
Easily distracted
Lack of sustained focus on non-preferred tasks
Difficulty finishing tasks such as homework without supervision
Poor short-term memory, like difficulty following a series of instructions
Often forgetful, such as forgetting to do or turn in homework
Poor listening skills
ADHD: (Hyperactive-impulsive presentation)
Impulsive (acting without thinking of the consequences, blurting out answers, experiencing difficulty waiting for one's turn)
Hyperactive (fidgety and/or difficulty sitting still)
Interrupts others
Inability to play quietly
ADHD: (Combined presentation)
Inattentive symptoms along with hyperactivity/impulsivity
ADHD: (Unspecified ADHD)
Significant clinical impairment that does not meet the full symptom criteria for the presentations above
To be diagnosed as having ADHD, two additional conditions must also be met:
Symptoms must occur more frequently or severely when compared to children/adolescents of the same age
Symptoms must reduce the quality of functioning in one's life, such as interfering with social, academic, and/or occupational functioning
What Causes ADHD?
The inattention and hyperactivity resulting from ADHD are caused by dysregulation within the brain, most often involving the frontal region. This area is responsible for attention and focus.
It may seem like a child with ADHD is simply acting out or intentionally choosing not to pay attention. Indeed, these children may be seen as lazy, unmotivated, or ill-tempered. In reality, their brain simply isn't allowing them to meet age-appropriate demands due to dysregulation in the brain. Effort, discipline, and parental strictness aren't enough to normalize that dysregulation.
How Does Sugar Affect Children With ADHD?
Sugar and ADHD are a poor combination. Sugar causes a release of dopamine in the brain similar to stimulant drugs. Even children without ADHD can become fidgety and inattentive after ingesting sugar. For children with ADHD, sugar can further disrupt an already dysregulated brain.
Sugar can also have a negative impact on the gut. Children with ADHD may already have gastrointestinal symptoms, and sugar could make them worse. Essentially, sugar affects ADHD by increasing brain dysregulation, which exacerbates symptoms. It also causes a spike in blood sugar, which triggers a spike in insulin; the hypoglycemia that can follow makes inattention worse. For children with ADHD, a sugar crash can be particularly disruptive in the classroom: a child who eats a sugary breakfast may crash mid-morning.
Sugar does indeed affect children with ADHD, but does sugar cause ADHD? There have been no studies to indicate that sugar consumption causes ADHD.
What About Artificial Sweeteners?
When it comes to sugar intake and sugar addiction in ADHD, it's important to pay attention to artificial sweeteners as well. Some sweeteners, like aspartame and saccharin, are known to affect some individuals negatively and could lead kids with ADHD to have headaches or learning problems. Artificial sweeteners can also make sugar cravings worse. Artificial colorings and flavorings can likewise disrupt brain functioning; in Europe, there are warning labels on foods containing artificial colorings and flavorings.
Ingredients To Watch Out For
It may seem easy to identify sugar as an ingredient in packaged or prepared foods, but it's not the only thing you have to look out for. Several ingredients serve as "code words" for sugar, making it more challenging to avoid. Here are some of the most common ingredients that are essentially the same as sugar in how they interact with the brain.
Corn sweetener
Corn syrup
Corn syrup solids
Dehydrated cane juice
Dextrin
Dextrose
Maltodextrin
Molasses
Rice syrup
Sucrose
Agave
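For anyone auditing food labels at home, the list above is easy to automate. The sketch below is a hypothetical Python helper (the alias set and function name are ours, not part of any official screening tool) that flags these hidden sugars in an ingredient list:

```python
# Common label aliases for added sugar (taken from the list above).
SUGAR_ALIASES = {
    "corn sweetener", "corn syrup", "corn syrup solids",
    "dehydrated cane juice", "dextrin", "dextrose", "maltodextrin",
    "molasses", "rice syrup", "sucrose", "agave",
}

def flag_sugar_aliases(ingredients):
    """Return the ingredients that are hidden forms of sugar."""
    return [item for item in ingredients
            if item.strip().lower() in SUGAR_ALIASES]

print(flag_sugar_aliases(["Water", "Maltodextrin", "Salt", "Rice syrup"]))
# → ['Maltodextrin', 'Rice syrup']
```

A real label check would also need fuzzy matching (e.g. "organic rice syrup"), but exact matching against the list is enough to illustrate the idea.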
ADHD & Sugar Addiction
Sugar in the quantity we see now is a relatively new addition to the human diet. Previously, sugar was ingested in its natural form, through fruits and vegetables. Modern refined sugars like those listed above are now found in many foods, even those that aren't "sweet."
This change in diet has affected what we eat and the foods we crave. Sugar has been studied extensively and is a highly addictive substance. Indeed, one study performed on rats revealed that sugar was even more addictive than cocaine. The study indicates that sugar generates an abnormally intense reward signal in the brain and can even "override self-control mechanisms," leading to addiction.
Whether or not ADHD children are more prone to becoming addicted to sugar than their neurotypical peers is inconclusive. However, ADHD sugar intake should be monitored and minimized since it can worsen symptoms.
Best Diet Plan For ADHD Children
A balanced and nutritional diet is important for the brain to function optimally. It's even more important for children with ADHD, who are already being affected by brain dysregulation. The best diet plan for ADHD children will be similar to any other diet plan aiming for optimum health. We recommend something like the Mediterranean diet.
Because of how sugar affects ADHD children, it's best to avoid it as much as possible, especially in processed foods. Maintaining consistent blood sugar levels will help with stability and performance.
There should also be an emphasis on vitamins and minerals that are particularly useful in maintaining brain health.
For more detailed information on monitoring ADHD and blood sugar and what foods to eat and avoid with ADHD, check out the Drake Institute's full ADHD diet recommendations.
How The Drake Institute Treats ADHD
Understanding how sugar affects ADHD is a good step in helping any child improve their ADHD symptoms. To optimize treatment outcomes, we recommend eating a healthy diet with as little sugar as possible to support our customized brain map-guided neurofeedback treatment protocols. At the Drake Institute, these protocols are derived from the patient’s abnormal brain wave patterns linked to their symptoms.
Brain Mapping
Brain mapping, also called qEEG brain mapping, is the first step in our treatment for brain-based disorders like ADHD and Autism Spectrum Disorders. The patient's brainwave activity is measured and recorded using specialized sensors and advanced technology. The data is then compared against an FDA-registered normative database of asymptomatic, same-age individuals.
The comparison to more "typical" brainwave patterns helps our Medical Director identify areas of the brain that may be experiencing dysregulation contributing to symptoms. Once the areas have been identified, a treatment protocol is designed for the patient.
Neurofeedback
After the brain has been mapped out, the patient undergoes neurofeedback treatment/training. During this stage of treatment, the sensors are again placed on the patient's scalp, where they record the brainwave activity.
This recorded brainwave activity is displayed on a screen in a form that is easy to understand, like a video game. During treatment sessions, the patient "plays" the video game by guiding their own brainwaves towards healthier functional patterns.
Neurofeedback treatment at the Drake Institute is drug-free and non-invasive, making it a safe and effective treatment choice for adults, teenagers, and children with ADHD.
Contact The Drake Institute Today!
If your child or an adult family member has been diagnosed with ADHD or has been displaying ADHD symptoms, find out how the Drake Institute can help. Just call us at 800-700-4233 or fill out the free consultation form.
David F. Velkoff, M.D., our Medical Director and co-founder, supervises all evaluation procedures and treatment programs. He is recognized as a physician pioneer in using biofeedback, qEEG brain mapping, neurofeedback, and neuromodulation in the treatment of ADHD, Autism Spectrum Disorders, and stress related illnesses including anxiety, depression, insomnia, and high blood pressure.
Dr. David Velkoff earned his Master’s degree in Psychology from the California State University at Los Angeles in 1975, and his Doctor of Medicine degree from Emory University School of Medicine in Atlanta in 1976. This was followed by Dr. Velkoff completing his internship in Obstetrics and Gynecology with an elective in Neurology at the University of California Medical Center in Irvine. He then shifted his specialty to Neurophysical Medicine and received his initial training in biofeedback/neurofeedback in Neurophysical Medicine from the leading doctors in the world in biofeedback at the renowned Menninger Clinic in Topeka, Kansas. In 1980, he co-founded the Drake Institute of Neurophysical Medicine.
Seeking to better understand the link between illness and the mind, Dr. Velkoff served as the clinical director of an international research study on psychoneuroimmunology with the UCLA School of Medicine, Department of Microbiology and Immunology, and the Pasteur Institute in Paris. This was a follow-up study to an earlier clinical collaborative effort with UCLA School of Medicine demonstrating how the Drake Institute's stress treatment resulted in improved immune functioning of natural killer cell activity.
Dr. Velkoff served as one of the founding associate editors of the scientific publication, Journal of Neurotherapy. He has been an invited guest lecturer at Los Angeles Children's Hospital, UCLA, Cedars Sinai Medical Center-Thalians Mental Health Center, St. John's Hospital in Santa Monica, California, and CHADD. He has been a medical consultant in Neurophysical Medicine to CNN, National Geographic Channel, Discovery Channel, Univision, and PBS.
Largest study to date confirms role of two hormones in aggressive prostate cancer risk
To date, the limited number of prostate cancer cases within cohort studies meant it was not possible to assess how IGF-1 and free testosterone affect the risk of different types of prostate cancer, particularly aggressive forms of the disease.
Furthermore, it was not clear whether these hormones directly increase prostate cancer risk, or if they are merely linked to a different factor which is the true cause. It was also possible that these associations were the result of reverse causation, where preclinical cancer symptoms caused the hormone levels to change before the disease was diagnosed.
To clarify these unknowns, CEU researchers led the largest study to date, using data from an international consortium: the Endogenous Hormones, Nutritional Biomarkers and Prostate Cancer Collaborative Group. This worldwide database collates information from all prospective studies of hormonal factors and prostate cancer risk, and contains over 17,000 prostate cancer cases with measured hormone levels (including 2,300 aggressive cases) and 37,000 controls. They also obtained genetic data from the PRACTICAL consortium, which contains over 79,000 prostate cancer cases and 60,000 controls.
The researchers investigated the association between blood levels of IGF-1 and free testosterone and the risk of overall, aggressive and early-onset prostate cancer. In addition, they performed a genetic approach known as Mendelian randomisation (MR). This used genetic variants that have previously been associated with levels of IGF-1 and free testosterone to investigate whether those with higher genetically predicted hormone concentrations have an increased risk of prostate cancer. Because these genetic variants are randomly allocated and fixed before birth, MR studies are less likely to be affected by confounding factors or reverse causation than studies which directly measure hormone levels.
Key findings
In the blood-based analysis, levels of IGF-1 were positively associated with a greater risk of overall and aggressive prostate cancer. For each standard deviation increase, the risk rose by 9% for both outcomes.
This was confirmed in the MR analysis: higher genetically predicted levels of IGF-1 were associated with a greater risk of overall, aggressive and early-onset prostate cancer. For each standard deviation increase in genetically predicted IGF-1, the risk increased by 7%, 10% and 13% respectively.
In the blood-based analysis, levels of free testosterone were positively associated with a greater risk of overall prostate cancer. For each standard deviation increase, the risk increased by 3%.
In the MR study, higher genetically predicted levels of free testosterone were associated with a greater risk of overall, aggressive and early-onset prostate cancer. For each standard deviation increase in free testosterone, the risk increased by 20%, 23% and 37% respectively.
According to the researchers, the results suggest that reducing blood levels of both IGF-1 and free testosterone through lifestyle or drug interventions may be a strategy to decrease prostate cancer risk, although this needs further research.
Dr Eleanor Watts (formerly CEU, now at the National Cancer Institute), lead author for both studies, said: ‘This is the first analysis that has applied both blood-based and genetic approaches to investigate the association of these hormones with prostate cancer risk, using data from two large international consortia that represent almost all the available data worldwide. For the first time we show evidence that both IGF-I and free testosterone are important for aggressive, clinically relevant disease. These findings support the need for more research on the modifiable determinants of these hormones, and on whether interventions to lower levels of these hormones might reduce the risk of prostate cancer.’
High levels of two hormones in the blood raise prostate cancer risk
Men with higher levels of ‘free’ testosterone and a growth hormone in their blood are more likely to be diagnosed with prostate cancer, according to research presented at the 2019 NCRI Cancer Conference.
Factors such as older age, ethnicity and a family history of the disease are already known to increase a man’s risk of developing prostate cancer. However, the new study of more than 200,000 men is one of the first to show strong evidence of two factors that could possibly be modified to reduce prostate cancer risk.
The research was led by Ruth Travis and Ellie Watts. Dr Travis said: “Prostate cancer is the second most commonly diagnosed cancer in men worldwide after lung cancer and a leading cause of cancer death but there is no evidence-based advice that we can give to men to reduce their risk. We were interested in studying the levels of two hormones circulating in the blood because previous research suggests they could be linked with prostate cancer and because these are factors that could potentially be altered in an attempt to reduce prostate cancer risk.”
The researchers studied 200,452 men who are part of the UK Biobank project. All were free of cancer when they joined the study and were not taking any hormone therapy.
The men gave blood samples that were tested for their levels of testosterone and the growth hormone insulin-like growth factor-I (IGF-I). The researchers calculated levels of free testosterone – testosterone that is circulating in the blood and not bound to any other molecule and can therefore have an effect in the body. A subset of 9,000 men later gave a second blood sample, to help the researchers account for natural fluctuations in hormone levels.
The men were followed for an average of six to seven years to see if they went on to develop prostate cancer. Within the group, there were 5,412 cases and 296 deaths from the disease.
The researchers found that men with higher concentrations of the two hormones in their blood were more likely to be diagnosed with prostate cancer. For every increase of five nanomoles in the concentration of IGF-I per litre of blood (5 nmol/L), men were 9% more likely to develop prostate cancer. For every increase of 50 picomoles of free testosterone per litre of blood (50 pmol/L), there was a 10% increase in prostate cancer risk.
In the population as a whole, the findings correspond to a 25% greater risk in men who have the highest levels of IGF-I, compared to those with the lowest. Men with the highest free testosterone levels face an 18% greater risk of prostate cancer, compared to those with the lowest levels.
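To put the per-increment figures in perspective: relative risks compound multiplicatively, so the implied risk for a larger rise can be sketched as below. This is an illustrative calculation only, assuming for simplicity that the reported per-increment association holds uniformly across the range:

```python
def relative_risk(delta, step, risk_per_step):
    """Compound a per-increment relative risk over a larger increase.

    delta: total increase in the biomarker
    step: increment the reported risk refers to (e.g. 5 nmol/L)
    risk_per_step: fractional risk increase per step (e.g. 0.09 for 9%)
    """
    return (1.0 + risk_per_step) ** (delta / step)

# A 10 nmol/L rise in IGF-I at 9% per 5 nmol/L implies about 1.19x risk:
print(round(relative_risk(10, 5, 0.09), 4))    # → 1.1881
# A 100 pmol/L rise in free testosterone at 10% per 50 pmol/L:
print(round(relative_risk(100, 50, 0.10), 4))  # → 1.21
```

Real dose-response relationships are rarely perfectly log-linear, which is why the study also reports the highest-versus-lowest comparisons directly.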
The researchers say that because the blood tests were taken some years before the prostate cancer developed, it is likely that the hormone levels are leading to the increased risk of prostate cancer, as opposed to the cancers leading to higher levels of the hormones. Thanks to the large size of the study, the researchers were also able to take account of other factors that can influence cancer risk, including body size, socioeconomic status and diabetes.
Dr Travis said: “This type of study can’t tell us why these factors are linked, but we know that testosterone plays a role in the normal growth and function of the prostate and that IGF-I has a role in stimulating the growth of cells in our bodies.”
“What this research does tell us is that these two hormones could be a mechanism that links things like diet, lifestyle and body size with the risk of prostate cancer. This takes us a step closer to strategies for preventing the disease.”
Testosterone and prostate cancer risk: the plot thickens
New research presented this weekend at the National Cancer Research Institute (NCRI) Cancer Conference in Liverpool has concluded that men with naturally low levels of the male sex hormone testosterone are less likely to develop prostate cancer than those with higher blood levels of the hormone.
This research, carried out by scientists at the University of Oxford, looked at blood samples from around 19,000 men aged between 34 and 76, collected between 1959 and 2004. 6,900 of these men went on to develop prostate cancer. The scientists divided the men into 10 groups, depending on the level of testosterone in their blood, and compared this to prostate cancer risk.
What’s interesting about this research is that while low levels of testosterone were associated with decreased risk of developing prostate cancer, high testosterone levels were not associated with increased risk. This supports the theory that there are only so many androgen receptors (the proteins that bind testosterone to activate it, so that it can do its job) in the body. So once these are all ‘full up’ with testosterone, it doesn’t matter how much more testosterone is circulating in the blood, because it can’t bind to and activate a receptor. This would explain why high levels of testosterone don’t increase risk of developing prostate cancer, but low levels can lower it.
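The receptor-saturation theory described above can be illustrated with a toy occupancy curve (a simple Michaelis–Menten-style sketch with made-up units; nothing here is a physiological measurement):

```python
def receptor_occupancy(testosterone, k=1.0):
    """Fraction of androgen receptors bound, saturating toward 1.0.

    testosterone: circulating level (arbitrary illustrative units)
    k: level at which half the receptors are occupied (illustrative)
    """
    return testosterone / (k + testosterone)

# Occupancy climbs steeply at low hormone levels...
print(round(receptor_occupancy(0.5), 3))    # → 0.333
print(round(receptor_occupancy(5.0), 3))    # → 0.833
# ...but is nearly flat once the receptors are saturated:
print(round(receptor_occupancy(50.0), 3))   # → 0.98
print(round(receptor_occupancy(500.0), 3))  # → 0.998
```

Under this sketch, lowering testosterone from an already high level barely changes occupancy, while lowering it from a low level does – matching the pattern of low levels reducing risk without high levels raising it.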
Testosterone levels alone may not hold the key
However, while this research gives some interesting clues about factors involved in causing prostate cancer in the first place – which will undoubtedly prove useful in working out how to one day prevent the disease from occurring – it also raised more difficult questions.
That’s because although men with lower levels of testosterone were less likely to develop prostate cancer, once they did, it was more likely to be an aggressive form of the disease. So far, we don’t have any answers as to why this might be, but it adds yet another layer of complexity to the mystery of prostate cancer development, and opens another avenue of investigation to the scientists set on unravelling these sorts of clues. It also suggests that testosterone levels alone will not hold the key to the causes of prostate cancer development, and that the link between male sex hormones and cancer development may well be more complicated than we previously imagined.
Dr Matthew Hobbs, Deputy Director of Research at Prostate Cancer UK said: “This research gives us some important clues about the role that testosterone might play in triggering prostate cancer. It’s particularly interesting that men in this study with the lowest levels of the hormone were less likely to get prostate cancer, but if they were diagnosed, it was more likely to be aggressive. This is clearly a complex effect and more research is needed to understand it.
“We still know too little about what causes prostate cancer cells to develop. We urgently need this knowledge to understand how we might prevent the disease in the future, which is why this is a key research priority for Prostate Cancer UK. Until we know more about the underlying causes of prostate cancer, it’s important that all men – and particularly black men, men with family history and men over 50 – are aware of their risk of prostate cancer and go to the GP if they have any concerns.”
Can Testosterone Replacement Therapy Increase the Risk of Prostate Cancer?
Many men who receive hormone replacement therapy wonder if this treatment could increase their risk of developing cancer. More specifically, since prostate cancer is so common, some men may question whether there is any link between testosterone replacement therapy and prostate cancer. According to the results of a recent study, testosterone treatment does not increase a man’s risk of developing prostate cancer. In fact, new research suggests that this treatment may actually reduce the risk of aggressive prostate cancer.
What are the risks of testosterone replacement therapy?
While there is currently no evidence to suggest the existence of any link between testosterone replacement therapy and prostate cancer, the use of this treatment is not completely without risk. For instance, some men may experience immediate side effects, such as breathing disturbances during sleep, breast swelling or tenderness, ankle swelling and acne. Many physicians also monitor their testosterone replacement patients for high red blood cell counts, which can increase the risk of blood clots. Additionally, long-term testosterone replacement therapy is associated with an increased risk of cardiovascular problems, including heart attacks and strokes, particularly in older men.
For men who have low blood testosterone levels, the benefits of hormone replacement therapy generally outweigh the potential risks. However, most other men who are considering testosterone replacement therapy should proceed cautiously. It’s always best to consult with an experienced physician who can provide individualized advice after carefully weighing the risks and benefits. Sometimes, the issues sought to be addressed with hormone therapy, such as fatigue and low sex drive, can be targeted in other ways. For instance, it may be appropriate to first identify and address any nutritional, exercise or sleep deficiencies before considering hormone replacement therapy.
If you would like to discuss your prostate cancer risk with an oncologist in the Urologic Oncology Program at Moffitt Cancer Center, you can request an appointment by calling 1-888-663-3488 or completing our new patient registration form online. We do not require referrals. | Can Testosterone Replacement Therapy Increase the Risk of Prostate Cancer?
stance: no
Urology | Can testosterone increase the risk of prostate cancer?
Source: https://www.sciencedaily.com/releases/2019/10/191031204628.htm (ScienceDaily)
High levels of two hormones in the blood raise prostate cancer risk
Men with higher levels of 'free' testosterone and a growth hormone in their blood are more likely to be diagnosed with prostate cancer, according to research presented at the 2019 NCRI Cancer Conference.
Other factors such as older age, ethnicity and a family history of the disease are already known to increase a man's risk of developing prostate cancer.
However, the new study of more than 200,000 men is one of the first to show strong evidence of two factors that could possibly be modified to reduce prostate cancer risk.
The research was led by Dr Ruth Travis, an Associate Professor, and Ellie Watts, a Research Fellow, both based at the Nuffield Department of Population Health, University of Oxford, UK. Dr Travis said: "Prostate cancer is the second most commonly diagnosed cancer in men worldwide after lung cancer and a leading cause of cancer death. But there is no evidence-based advice that we can give to men to reduce their risk.
"We were interested in studying the levels of two hormones circulating in the blood because previous research suggests they could be linked with prostate cancer and because these are factors that could potentially be altered in an attempt to reduce prostate cancer risk."
The researchers studied 200,452 men who are part of the UK Biobank project. All were free of cancer when they joined the study and were not taking any hormone therapy.
The men gave blood samples that were tested for their levels of testosterone and a growth hormone called insulin-like growth factor-I (IGF-I). The researchers calculated levels of free testosterone -- testosterone that circulates in the blood unbound to any other molecule and can therefore have an effect in the body. A subset of 9,000 men gave a second blood sample at a later date, to help the researchers account for natural fluctuations in hormone levels.
The men were followed for an average of six to seven years to see if they went on to develop prostate cancer. Within the group, there were 5,412 cases and 296 deaths from the disease.
The researchers found that men with higher concentrations of the two hormones in their blood were more likely to be diagnosed with prostate cancer. For every increase of five nanomoles in the concentration of IGF-I per litre of blood (5 nmol/L), men were 9% more likely to develop prostate cancer. For every increase of 50 picomoles of 'free' testosterone per litre of blood (50 pmol/L), there was a 10% increase in prostate cancer risk.
Looking at the population as a whole, the researchers say their findings correspond to a 25% greater risk in men who have the highest levels of IGF-I, compared to those with the lowest. Men with the highest 'free' testosterone levels face an 18% greater risk of prostate cancer, compared to those with the lowest levels.
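As a rough arithmetic check, the per-increment figures above can be combined under a multiplicative (log-linear) scaling assumption, which the article does not state but which is a common way such risk ratios are modelled. The hormone increments, percentages, and cohort counts come from the text; the midpoint follow-up of 6.5 years is an assumption.

```python
# Sanity-check the reported dose-response figures, assuming a
# log-linear (multiplicative) risk model -- an illustrative
# assumption, not something stated in the article.

def relative_risk(delta, unit, rr_per_unit):
    """Relative risk for an increase of `delta`, given the risk
    ratio per `unit` of exposure, under multiplicative scaling."""
    return rr_per_unit ** (delta / unit)

# 9% higher risk per 5 nmol/L of IGF-I (one increment).
print(relative_risk(5, 5, 1.09))
# 10% higher risk per 50 pmol/L of free testosterone; a rise of
# 100 pmol/L is two increments, i.e. 1.10 squared.
print(relative_risk(100, 50, 1.10))

# Crude incidence in the cohort: 5,412 cases among 200,452 men
# followed for roughly 6-7 years (midpoint of 6.5 assumed here).
cases, men, years = 5412, 200452, 6.5
rate = cases / (men * years)
print(f"{rate * 100_000:.0f} cases per 100,000 person-years")
```

The script only restates the article's numbers under the stated assumption; it fits nothing to data.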
The researchers say that because the blood tests were taken some years before the prostate cancer developed, it is likely that the hormone levels are leading to the increased risk of prostate cancer, as opposed to the cancers leading to higher levels of the hormones. Thanks to the large size of the study, the researchers were also able to take account of other factors that can influence cancer risk, including body size, socioeconomic status and diabetes.
Dr Travis said: "This type of study can't tell us why these factors are linked, but we know that testosterone plays a role in the normal growth and function of the prostate and that IGF-I has a role in stimulating the growth of cells in our bodies."
"What this research does tell us is that these two hormones could be a mechanism that links things like diet, lifestyle and body size with the risk of prostate cancer. This takes us a step closer to strategies for preventing the disease."
Dr Travis and Ms Watts will continue examining the data from this study to confirm their findings. In the future, they also plan to home in on risk factors for the most aggressive types of prostate cancer.
Professor Hashim Ahmed, chair of NCRI's prostate group and Professor of Urology at Imperial College London, who was not involved in the research said: "These results are important because they show that there are at least some factors that influence prostate cancer risk that can potentially be altered. In the longer term, it could mean that we can give men better advice on how to take steps to reduce their own risk.
"This study also shows the importance of carrying out very large studies, which are only possible thanks to the thousands of men who agreed to take part."
National Cancer Research Institute. "High levels of two hormones in the blood raise prostate cancer risk." ScienceDaily. ScienceDaily, 31 October 2019. <www.sciencedaily.com/releases/2019/10/191031204628.htm>.
stance: yes
Urology | Can testosterone increase the risk of prostate cancer?
Source: https://www.cancer.org/cancer/types/prostate-cancer/treating/hormone-therapy.html (American Cancer Society)
Hormone Therapy for Prostate Cancer
Hormone therapy is also called androgen suppression therapy. The goal of this treatment is to reduce levels of male hormones, called androgens, in the body, or to stop them from fueling prostate cancer cell growth.
Androgens stimulate prostate cancer cells to grow. The main androgens in the body are testosterone and dihydrotestosterone (DHT). Most androgens are made by the testicles, but the adrenal glands (glands that sit above your kidneys) as well as the prostate cancer cells themselves, can also make androgens.
Lowering androgen levels or stopping them from getting into prostate cancer cells often makes prostate cancers shrink or grow more slowly for a time. But hormone therapy alone does not cure prostate cancer.
When is hormone therapy used?
Hormone therapy may be used:
If the cancer has spread too far to be cured by surgery or radiation, or if you can’t have these treatments for some other reason
If the cancer remains or comes back after treatment with surgery or radiation therapy
Along with radiation therapy as the initial treatment, if you are at higher risk of the cancer coming back after treatment (based on a high Gleason score, high PSA level, and/or growth of the cancer outside the prostate)
Before radiation to try to shrink the cancer to make treatment more effective
Types of hormone therapy
Several types of hormone therapy can be used to treat prostate cancer.
Treatment to lower testicular androgen levels
Androgen deprivation therapy, also called ADT, uses surgery or medicines to lower the levels of androgens made by the testicles.
Orchiectomy (surgical castration)
Even though this is a type of surgery, its main effect is as a form of hormone therapy. In this operation, the surgeon removes the testicles, where most of the androgens (such as testosterone and DHT) are made. This causes most prostate cancers to stop growing or shrink for a time.
This is done as an outpatient procedure. It is probably the least expensive and simplest form of hormone therapy. But unlike some of the other treatments, it is permanent, and many men have trouble accepting the removal of their testicles. Because of this, they may choose treatment with drugs that lower hormone levels (such as an LHRH agonist or antagonist) instead.
Some men having this surgery are concerned about how it will look afterward. If wanted, artificial testicles that look much like normal ones can be inserted into the scrotum.
LHRH agonists
Luteinizing hormone-releasing hormone (LHRH) agonists (also called LHRH analogs or GnRH agonists) are drugs that lower the amount of testosterone made by the testicles. Treatment with these drugs is sometimes called medical castration because they lower androgen levels just as well as orchiectomy.
With these drugs, the testicles stay in place, but they will shrink over time, and they may even become too small to feel.
LHRH agonists are injected or placed as small implants under the skin. Depending on the drug used, they are given anywhere from once a month up to once every 6 months. The LHRH agonists available in the United States include:
Leuprolide (Lupron, Eligard)
Goserelin (Zoladex)
Triptorelin (Trelstar)
Leuprolide mesylate (Camcevi)
When LHRH agonists are first given, testosterone levels go up briefly before falling to very low levels. This effect, called tumor flare, results from the complex way in which these drugs work. Men whose cancer has spread to the bones may have bone pain. Men whose prostate gland has not been removed may have trouble urinating. If the cancer has spread to the spine, even a short-term increase in tumor growth as a result of the flare could press on the spinal cord and cause pain or paralysis. A flare can be avoided by giving drugs called anti-androgens (discussed below) for a few weeks when starting treatment with LHRH agonists.
LHRH antagonists
LHRH antagonists can be used to treat advanced prostate cancer. These drugs work in a slightly different way from the LHRH agonists, but they lower testosterone levels more quickly and don’t cause tumor flare like the LHRH agonists do. Treatment with these drugs can also be considered a form of medical castration.
Degarelix (Firmagon) is given as a monthly injection under the skin. Some men may notice problems at the injection site (pain, redness, and swelling).
Relugolix (Orgovyx) is taken as pills, once a day, so it might allow for less frequent office visits.
Possible side effects
Orchiectomy and LHRH agonists and antagonists can all cause similar side effects from lower levels of hormones such as testosterone. These side effects can include reduced or absent sexual desire, erectile dysfunction, hot flashes, breast tenderness and growth of breast tissue, shrinkage of the testicles and penis, thinning of the bones (osteoporosis), anemia, loss of muscle mass, weight gain, fatigue, and depression.
Some research has suggested that the risk of high blood pressure, diabetes, strokes, heart attacks, and even death from heart disease is higher in men treated with hormone therapy, although not all studies have found this.
Many side effects of hormone therapy can be prevented or treated. For example:
Hot flashes can often be helped by treatment with certain antidepressants or other drugs.
Brief radiation treatment to the breasts can help prevent their enlargement, but this is not effective once breast enlargement has occurred.
Several drugs can help prevent and treat osteoporosis.
Depression can be treated with antidepressants and/or counseling.
Exercise can help reduce many side effects, including fatigue, weight gain, and the loss of bone and muscle mass.
There is growing concern that hormone therapy for prostate cancer may lead to problems thinking, concentrating, and/or with memory, but this has not been studied thoroughly. Still, hormone therapy does seem to lead to memory problems in some men. These problems are rarely severe, and most often affect only some types of memory. More studies are being done to look at this issue.
Treatment to lower androgen levels from other parts of the body
LHRH agonists and antagonists can stop the testicles from making androgens, but cells in other parts of the body, such as the adrenal glands, and prostate cancer cells themselves, can still make male hormones, which can fuel cancer growth. Some drugs can block the formation of androgens made by these cells.
Abiraterone (Zytiga) blocks an enzyme (protein) called CYP17, which helps stop these cells from making androgens.
Abiraterone can be used in men with advanced prostate cancer that is either:
High risk (cancer with a high Gleason score, spread to several spots in the bones, or spread to other organs)
Castration-resistant (cancer that is still growing despite low testosterone levels from an LHRH agonist, LHRH antagonist, or orchiectomy)
This drug is taken as pills every day. It doesn’t stop the testicles from making testosterone, so men who haven’t had an orchiectomy need to continue treatment with an LHRH agonist or antagonist. Because abiraterone also lowers the level of some other hormones in the body, prednisone (a corticosteroid drug) needs to be taken during treatment as well to avoid certain side effects.
Ketoconazole (Nizoral), first used for treating fungal infections, also blocks production of androgens made in the adrenal glands, much like abiraterone. It's most often used to treat men just diagnosed with advanced prostate cancer who have a lot of cancer in the body, as it offers a quick way to lower testosterone levels. It can also be tried if other forms of hormone therapy are no longer working.
Ketoconazole also can block the production of cortisol, an important steroid hormone in the body, so men treated with this drug often need to take a corticosteroid (such as prednisone or hydrocortisone).
Drugs that stop androgens from working
Anti-androgens
For most prostate cancer cells to grow, androgens have to attach to a protein in the prostate cancer cell called an androgen receptor. Anti-androgens are drugs that also connect to these receptors, keeping the androgens from causing tumor growth. Anti-androgens are also sometimes called androgen receptor antagonists.
Drugs of this type include:
Flutamide (Eulexin)
Bicalutamide (Casodex)
Nilutamide (Nilandron)
They are taken daily as pills.
In the United States, anti-androgens are not often used by themselves:
An anti-androgen may be added to treatment if orchiectomy or an LHRH agonist or antagonist is no longer working by itself.
An anti-androgen is also sometimes given for a few weeks when an LHRH agonist is first started. This can help prevent a tumor flare.
An anti-androgen can also be combined with orchiectomy or an LHRH agonist as first-line hormone therapy. This is called combined androgen blockade (CAB). There is still some debate as to whether CAB is more effective in this setting than using orchiectomy or an LHRH agonist alone. If there is a benefit, it appears to be small.
In some men, if an anti-androgen is no longer working, simply stopping the anti-androgen can cause the cancer to stop growing for a short time. This is called the anti-androgen withdrawal effect, although it is not clear why it happens.
Possible side effects: Anti-androgens have similar side effects to LHRH agonists, LHRH antagonists, and orchiectomy, but they may have fewer sexual side effects. When these drugs are used alone, sexual desire and erections can often be maintained. When these drugs are given to men already being treated with LHRH agonists, diarrhea is the major side effect. Nausea, liver problems, and tiredness can also occur.
Newer anti-androgens
Enzalutamide (Xtandi), apalutamide (Erleada) and darolutamide (Nubeqa) are newer types of anti-androgens. They can sometimes be helpful even when older anti-androgens are not.
All of these drugs can be helpful in men with cancer that has not spread but is no longer responding to other forms of hormone therapy (known as non-metastatic castration-resistant prostate cancer (CRPC), described below).
Enzalutamide can also be used for metastatic prostate cancer (cancer that has spread), whether it is castration-resistant or castration-sensitive (still responding to other forms of hormone therapy).
Apalutamide and darolutamide can also be used for metastatic castration-sensitive prostate cancer (CSPC), also known as hormone-sensitive prostate cancer (HSPC), described below.
These drugs are taken as pills each day.
Side effects can include diarrhea, fatigue, rash, and worsening of hot flashes. These drugs can also cause some nervous system side effects, including dizziness and, rarely, seizures. Men taking one of these drugs are more likely to fall, which may lead to injuries. Some men have also had heart problems when taking these newer types of anti-androgens.
Other androgen-suppressing drugs
Estrogens (female hormones) were once the main alternative to removing the testicles (orchiectomy) for men with advanced prostate cancer. Because of their possible side effects (including blood clots and breast enlargement), estrogens have been replaced by other types of hormone therapy. Still, estrogens may be tried if other hormone treatments are no longer working.
Current issues in hormone therapy
There are many issues around hormone therapy that not all doctors agree on, such as the best time to start and stop it and the best way to give it. Studies are now looking at these issues. A few of them are discussed here.
Treating early-stage cancer
Some doctors have used hormone therapy instead of observation or active surveillance in men with early-stage prostate cancer who do not want surgery or radiation. Studies have not found that these men live any longer than those who don’t get any treatment until the cancer progresses or symptoms develop. Because of this, hormone treatment is not usually advised for early-stage prostate cancer.
Early versus delayed treatment
For men who need (or will eventually need) hormone therapy, such as men whose PSA levels are rising after surgery or radiation or men with advanced prostate cancer who don’t yet have symptoms, it’s not always clear when it is best to start hormone treatment. Some doctors think that hormone therapy works better if it’s started as soon as possible, even if a man feels well and is not having any symptoms. Some studies have shown that hormone treatment may slow the disease down and perhaps even help men live longer.
But not all doctors agree with this approach. Some are waiting for more evidence of benefit. They feel that because of the side effects of hormone therapy and the chance that the cancer could become resistant to therapy sooner, treatment shouldn’t be started until a man has symptoms from the cancer. This issue is being studied.
Intermittent versus continuous hormone therapy
Most prostate cancers treated with hormone therapy become resistant to this treatment over a period of months or years. Some doctors believe that constant androgen suppression might not be needed, so they advise intermittent (on-again, off-again) treatment. This can allow for a break from side effects like decreased energy, sexual problems, and hot flashes.
In one form of intermittent hormone therapy, treatment is stopped once the PSA drops to a very low level. If the PSA level begins to rise, the drugs are started again. Another form of intermittent therapy uses hormone therapy for fixed periods of time – for example, 6 months on followed by 6 months off.
At this time, it isn’t clear how this approach compares to continuous hormone therapy. Some studies have found that continuous therapy might help men live longer, but other studies have not found such a difference.
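The stop/restart rule described above amounts to simple hysteresis on the PSA value. The sketch below is illustrative only: the threshold numbers are invented placeholders, not clinical guidance, and real decisions rest with the care team.

```python
# Illustrative sketch of intermittent hormone-therapy logic as a
# hysteresis rule. The threshold values are made-up placeholders,
# NOT clinical guidance.

STOP_BELOW = 0.5    # ng/mL: pause therapy once PSA drops this low (placeholder)
RESUME_ABOVE = 4.0  # ng/mL: restart therapy if PSA rises this high (placeholder)

def next_state(on_therapy, psa):
    """Return True if therapy should be on for the next interval."""
    if on_therapy and psa < STOP_BELOW:
        return False       # PSA very low: take a treatment break
    if not on_therapy and psa > RESUME_ABOVE:
        return True        # PSA rising again: resume therapy
    return on_therapy      # otherwise keep the current state

# Walk a made-up PSA series through the rule.
state = True
for psa in [8.0, 2.0, 0.3, 0.4, 1.0, 5.0, 1.2]:
    state = next_state(state, psa)
    print(psa, "on" if state else "off")
```

The two thresholds are deliberately far apart so the rule does not flip on small PSA fluctuations; that gap is the whole point of hysteresis.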
Combined androgen blockade (CAB)
Some doctors treat patients with androgen deprivation (orchiectomy or an LHRH agonist or antagonist) plus an anti-androgen. Some studies have suggested this may be more helpful than androgen deprivation alone, but others have not. Most doctors are not convinced there’s enough evidence that this combined therapy is better than starting with one drug alone when treating prostate cancer that has spread to other parts of the body.
Triple androgen blockade (TAB)
Some doctors have suggested taking combined therapy one step further, by adding a drug called a 5-alpha reductase inhibitor, either finasteride (Proscar) or dutasteride (Avodart), to the combined androgen blockade. There is very little evidence to support the use of this triple androgen blockade at this time.
The following terms are sometimes used to describe how well a man's prostate cancer is responding to hormone therapy.
Castration-sensitive prostate cancer (CSPC), also known as hormone-sensitive prostate cancer (HSPC), means the cancer is being controlled by keeping the testosterone level as low as what would be expected if the testicles were removed by castration. Levels can be kept this low with an orchiectomy, or by taking an LHRH agonist or an LHRH antagonist.
Castration-resistant prostate cancer (CRPC) means the cancer is still growing even when the testosterone levels are at or below the level that would be expected with castration. Some of these cancers might still be helped by other forms of hormone therapy, such as abiraterone or one of the newer anti-androgens.
Hormone-refractory prostate cancer (HRPC) refers to prostate cancer that is no longer helped by any type of hormone therapy, including the newer medicines.
More information about hormone therapy
To learn more about how hormone therapy is used to treat cancer, see Hormone Therapy.
stance: yes
Urology | Can testosterone increase the risk of prostate cancer?
Source: https://www.mayoclinic.org/tests-procedures/hormone-therapy-for-prostate-cancer/about/pac-20384737 (Hormone therapy for prostate cancer, Mayo Clinic)
Overview
Prostate cancer
Prostate cancer occurs in the prostate gland. The gland sits just below the bladder in males. It surrounds the top part of the tube that drains urine from the bladder, called the urethra. This illustration shows a healthy prostate gland and a prostate gland with cancer.
Hormone therapy for prostate cancer is a treatment that stops the hormone testosterone either from being made or from reaching prostate cancer cells.
Most prostate cancer cells rely on testosterone to grow. Hormone therapy causes prostate cancer cells to die or to grow more slowly.
Hormone therapy for prostate cancer may involve medicines or possibly surgery to remove the testicles.
Hormone therapy for prostate cancer also is known as androgen deprivation therapy.
Risks
Side effects of hormone therapy for prostate cancer can include:
Not being able to get or keep an erection, called erectile dysfunction.
Bone thinning, which can lead to broken bones.
Hot flashes.
Less body hair, smaller genitals and growth of breast tissue.
Tiredness.
Diabetes.
Heart disease.
Intermittent dosing
In certain situations, doctors may recommend taking hormone therapy medicines for a set amount of time or until the PSA level is very low. Then the medicine is stopped. For some people, this approach can help reduce the side effects of hormone therapy. If the prostate cancer comes back or gets worse, it might be necessary to start the medicines again.
Early research shows that starting and stopping hormone therapy medicines, sometimes called intermittent dosing, may lower the risk of side effects without affecting long-term survival. And this dosing approach might improve quality of life.
How you prepare
Medicines that stop the testicles from making testosterone. Certain medicines stop cells from getting the signals that tell them to make testosterone. These medicines are called luteinizing hormone-releasing hormone (LHRH) agonists and antagonists. Another name for these medicines is gonadotropin-releasing hormone agonists and antagonists.
Medicines that stop testosterone from acting on cancer cells. These medicines, known as anti-androgens, are often used with LHRH agonists. That's because LHRH agonists can cause a brief rise in testosterone levels before testosterone levels go down.
Surgery to remove the testicles, called an orchiectomy. Surgery to remove both testicles lowers testosterone levels in the body quickly. A version of this procedure removes only the tissue that makes testosterone, not the testicles. Surgery to remove the testicles can't be reversed.
What you can expect
LHRH agonists and antagonists
LHRH agonist and antagonist medicines stop the testicles from making testosterone.
Most of these medicines are given as a shot under the skin or into a muscle. They're given monthly, every three months or every six months. Or they can be put under the skin as an implant. The implant slowly releases medicines over time.
LHRH agonists include:
Leuprolide (Eligard, Lupron Depot, others).
Goserelin (Zoladex).
Triptorelin (Trelstar).
LHRH antagonists include:
Degarelix (Firmagon).
Relugolix (Orgovyx).
Testosterone levels might rise briefly, called a flare, for a few weeks after starting an LHRH agonist. LHRH antagonists don't cause a testosterone flare.
Cutting the risk of a flare is important for those who have pain or other cancer symptoms. An increase in testosterone can make symptoms worse. Taking an anti-androgen either before or with an LHRH agonist can cut the risk of flare.
Anti-androgens
Anti-androgens keep testosterone from acting on cancer cells. These oral medicines often are taken with an LHRH agonist or before taking an LHRH agonist.
Anti-androgens include:
Bicalutamide (Casodex).
Flutamide.
Nilutamide (Nilandron).
Apalutamide (Erleada).
Darolutamide (Nubeqa).
Enzalutamide (Xtandi).
Other androgen-blocking medicines
When hormone therapy treatment stops the testicles from making testosterone, other cells in the body might make testosterone that can cause prostate cancer cells to grow. Other hormone therapy medicines can stop these other sources of testosterone. The medicines might be used when prostate cancer remains or comes back. These medicines are sometimes mixed with corticosteroids, such as prednisone. These medicines include:
Abiraterone (Yonsa, Zytiga).
Ketoconazole.
These medicines treat advanced prostate cancer that no longer responds to other hormone therapy treatments.
Orchiectomy
This treatment to remove the testicles is rarely used. After numbing the groin area, a surgeon cuts into the groin and removes the testicle through the opening. The surgeon repeats the process for the other testicle.
All surgery carries a risk of pain, bleeding and infection. Most people can go home after this operation. It usually doesn't require staying in the hospital.
Results
If you take hormone therapy for prostate cancer, you'll have regular follow-up meetings with your doctor. Your doctor may ask about any side effects you're experiencing. Many side effects can be controlled.
Your doctor might order tests to check your health and watch for signs that the cancer is coming back or getting worse. Results of these tests can show your response to hormone therapy. The treatment might be adjusted, if needed.
Study: Testosterone therapy does not raise prostate cancer risk
Testosterone, the hormone made in the testicles, drives men’s sexual development and physical strength.
In the past 30 years, millions of men globally have been diagnosed with low testosterone levels and been prescribed supplemental testosterone as therapy – even as oncologists have confirmed testosterone as an agent that fuels prostate cancer and have treated the disease by reducing patients’ levels of the hormone.
With this backdrop comes research today showing that, among nearly 150,000 men over age 40 with low testosterone levels, treatment with testosterone was not associated with increased risk for aggressive prostate cancer.
“This finding doesn’t change the guidelines for how we recommend testosterone therapy,” said Walsh, an associate professor of urology at the University of Washington School of Medicine and clinician at VA Puget Sound. “Men should still have their testosterone diagnosed appropriately, with multiple readings, and be counseled about risks and benefits of treatment. But this large foundation of evidence allows us to look patients in the eye and say testosterone therapy does not appear to increase risk of prostate cancer over a moderate duration.”
Testosterone, the hormone made in the testicles, drives men’s sexual development as well as physical strength and bone health. Its level decreases naturally with age. About 2 percent of adult men have a diagnostically low level of testosterone, according to the American Urological Association. Symptoms of “Low-T,” as it’s commonly called, include fatigue, reduced muscle mass, irritability and low sex drive.
Researchers examined the Veterans Affairs health system records of 147,593 men diagnosed with low testosterone between 2002 and 2011. Within six months of that diagnosis, all the men also had normal findings for prostate specific antigen (PSA), the main indicator of prostate cancer. Within this population, 58,617 received testosterone therapy. Three years was the median duration that patients were followed.
The researchers focused on the development of aggressive prostate cancer, Walsh said.
“We now know that the nonaggressive variations can simply be followed over time and may not lead to significant increases in morbidity or mortality. So for the study, we thought it was more important to identify the high risk prostate cancer associated with very high PSA or known histologically to be prone to spread,” he said.
The study reported that men who received testosterone therapy were subsequently diagnosed with aggressive prostate cancer at the rate of 0.58 per 1,000 person years. Among untreated men (n = 88,976) the incidence rate was nearly identical: 0.57 per 1,000 person years.
A major strength of the study, Walsh said, was the VA’s closed medical and pharmacy system, for three reasons:
It reduced the likelihood that patients in the study received relevant care outside of the system.
The patients’ data sets also included full medical histories, enabling researchers to better control for other serious illnesses that affected the study population’s mortality.
Most recipients of testosterone therapy received intramuscular injection, the typical delivery mechanism during the study span and also the most biologically available testosterone therapy. “Many previous studies of testosterone delivered in topical cream or patch have shown that men never achieve a biologically therapeutic level. In our study, most men received injections and had follow-up tests that proved that their testosterone levels actually rose with the therapy,” Walsh said.
In the United States, testosterone prescriptions have increased significantly over the past decade, due in part to the aging population and in part to pharmaceutical manufacturers’ marketing efforts aimed at aging men. Clinical guidelines state that prescriptions are appropriate for men who have repeated findings of low testosterone combined with specific symptoms, but Walsh acknowledged physicians’ increasingly common practice of prescribing testosterone to a patient after only one such finding and with nonspecific symptoms.
“We know that most of these men are treated for relatively short durations,” Walsh said, describing a short duration as lasting less than 12 months.
The study was supported by a grant (R01 AG042934-01) from the National Institutes of Health – National Institute on Aging and the U.S. Department of Veterans Affairs.
Prostate Cancer – Advanced: Symptoms, Diagnosis & Treatment
What is Advanced Prostate Cancer?
When prostate cancer spreads beyond the prostate or returns after treatment, it is often called advanced prostate cancer.
Prostate cancer is often grouped into four stages, with stages III and IV being more advanced prostate cancer.
Early Stage | Stages I & II: The tumor has not spread beyond the prostate.
Locally Advanced | Stage III: Cancer has spread outside the prostate but only to nearby tissues.
Advanced | Stage IV: Cancer has spread outside the prostate to other parts such as the lymph nodes, bones, liver or lungs.
When an early stage prostate cancer is found, it may be treated or placed on surveillance (watching closely). Advanced prostate cancer is not “curable,” but there are many ways to treat it. Treatment can help slow advanced prostate cancer progression.
There are several types of advanced prostate cancer, including:
Biochemical Recurrence
With biochemical recurrence, the prostate-specific antigen (PSA) level has risen after treatment(s) using surgery or radiation, with no other sign of cancer.
Castration-Resistant Prostate Cancer (CRPC)
Castration-resistant prostate cancer (CRPC) is a form of advanced prostate cancer. CRPC means the prostate cancer is growing or spreading even though testosterone levels are low from hormone therapy. Hormone therapy is also called testosterone depleting therapy or androgen deprivation treatment (ADT) and can help lower your natural testosterone level. It is given through medicine or surgery to most men with prostate cancer to reduce the testosterone “fuel” that makes this cancer grow. That fuel includes male hormones or androgens (like testosterone). Typically, prostate cancer growth slows down with hormone therapy, at least for some time. If the cancer cells begin to "outsmart" hormone treatment, they can grow even without testosterone. If this happens, the prostate cancer is considered CRPC.
Non-Metastatic Castration-Resistant Prostate Cancer (nmCRPC)
Prostate cancer that no longer responds to hormone treatment and is only found in the prostate. This is found by a rise in the PSA level, while the testosterone level stays low. Imaging tests do not show signs the cancer has spread.
Metastatic Prostate Cancer
Cancer cells have spread beyond the prostate. Cancer spread may be seen on imaging studies and may show the cancer has spread. Prostate cancer is metastatic if it has spread to these areas:
Lymph nodes outside the pelvis
Bones
Other organs, such as liver or lungs
You may be diagnosed with metastatic prostate cancer when you are first diagnosed, after having completed your first treatment or even many years later. It is uncommon to be diagnosed with metastatic prostate cancer on first diagnosis, but it does happen.
Metastatic Hormone-Sensitive Prostate Cancer (mHSPC)
Metastatic hormone-sensitive prostate cancer (mHSPC) is when cancer has spread past the prostate into the body and is responsive to hormone therapy or the patient has not yet had hormone therapy. This means that levels of male sex hormones, including androgens like testosterone, can be reduced to slow cancer growth. Unchecked, these male sex hormones “feed” the prostate cancer cells to let them grow. Hormone therapy, like ADT, may be used to reduce the levels of these hormones.
Metastatic Castration-Resistant Prostate Cancer (mCRPC)
Metastatic castration-resistant prostate cancer is when cancer has spread past the prostate into the body and it is able to grow and spread even after treatments were used to lower testosterone levels. The PSA levels keep rising and metastatic spots are present/growing. This is disease progression despite medical or surgical castration.
Symptoms
Men with advanced prostate cancer may or may not have any signs of sickness. Symptoms depend on the size of new growth and where the cancer has spread in the body. With advanced disease, mainly if you have not had treatment to the prostate itself, you may have problems passing urine or see blood in your urine. Some men may feel tired, weak or lose weight. When prostate cancer spreads to bones, you may have bone pain. Tell your doctor and nurse about any pain or other symptoms you feel. There are treatments that can help.
Risks
Your risks for prostate cancer rise if you are age 65 or older, have a family history of prostate cancer, are African American or have inherited mutations of the BRCA1 or BRCA2 genes.
Age: For all men, prostate cancer risk increases with age. About 6 in 10 cases of prostate cancer are found in men older than 65. Prostate cancer is rare in men under the age of 40.
Race/Ethnicity: African American men and Caribbean men of African ancestry face a higher risk for being diagnosed with prostate cancer. They are also more likely to be diagnosed with prostate cancer at younger ages. It is not clear why prostate cancer affects African American men more than other racial/ethnic groups.
Genetic Factors: The risk of prostate cancer more than doubles in men with a family history of prostate cancer in their grandfathers, fathers or brothers. Having family members with breast and ovarian cancer also raises a man's risk for prostate cancer. That is because breast, ovarian and prostate cancers share some of the same genes, including BRCA1 and BRCA2. If a person has any of these mutations, they should be screened earlier or more often for prostate cancer.
As a health care tool, genetic test results can help determine whether a certain treatment would be helpful. For example, men with an inherited poly-(ADP)-ribose polymerase (PARP) mutation in the DNA of cancer cells could be helped with a PARP inhibitor. This targeted therapy inhibits the PARP mutation and helps stop it from repairing cancer cells.
Your doctor may suggest genetic testing because of family history or because you have an aggressive prostate cancer. Genetic testing looks for certain inherited changes (mutations) in a person's genes and can help find out if a cancer is hereditary. To find out if you have a genetic mutation linked to prostate cancer, you may take a simple blood or saliva test.
Diagnosis
Advanced cancer may be found before, at the same time or later than the main tumor. Most men diagnosed with advanced prostate cancer have had biopsy and treatment in the past. When a new tumor is found in someone who has been treated for cancer in the past, usually cancer has spread. Even if you have already been diagnosed with prostate cancer, your health care provider may want to observe changes over time. The following tests are used to diagnose and track prostate cancer:
Blood Tests
The PSA blood test measures a protein in your blood called the prostate-specific antigen (PSA). Only the prostate and prostate cancers make PSA. Results for this test are usually shared as nanograms of PSA per milliliter (ng/mL) of blood. The PSA test is used to look for changes to the way your prostate produces PSA. It is used to stage cancer, plan treatment and track how well treatment is going. A rapid rise in PSA may be a sign something is wrong. In addition, your doctor may want to test the level of testosterone in your blood.
Digital Rectal Exam (DRE)
The Digital Rectal Exam (DRE) is a physical exam used to help your doctor feel for changes in your prostate. This test is also used to screen for and stage cancer, or track how well treatment is going. During this test, the doctor feels for an abnormal shape, consistency, nodularity or thickness to the gland. The DRE is often done with the PSA together. For this exam, the health care provider puts a lubricated gloved finger into the rectum.
Imaging and Scans
Imaging helps doctors learn more about your cancer. Some types are:
Magnetic resonance imaging (MRI): An MRI scan can give a very clear picture of the prostate and show if the cancer has spread into the seminal vesicles or nearby tissue. A contrast dye is often injected into a vein before the scan to see details. MRI scans use radio waves and strong magnets instead of x-rays.
Computed tomography (CT) scan: The CT scan is used to see cross-sectional views of tissue and organs. It combines x-rays and computer calculations for detailed images from different angles. It can show solid vs. liquid structures, so it is used to diagnose masses in the urinary tract. CT scans are not always as useful as MRI to see the prostate gland itself, but are very good at evaluating surrounding tissues and structures.
Bone scan: A bone scan can help show if cancer has reached the bones. If prostate cancer spreads to distant sites, it often goes to the bones first. In these studies, a radionuclide dye is injected into the body. Over a few hours, images are taken of the bones. The dye helps to make images of cancer show up more clearly.
Positron emission tomography (PET) scan: The PET scan may help your doctor better see where and how much the cancer is growing. A special drug (called a tracer) is given through your vein, or you may inhale or swallow the drug. Your cells will pick up the tracer as it passes through your body. The scanner allows your doctor to better see where and how much the cancer is growing.
Biopsy
Men diagnosed with advanced prostate cancer from the beginning may start with a prostate biopsy. It is also used to grade and stage the cancer. Most men diagnosed with advanced prostate cancer have had a prostate biopsy in the past. When a new tumor is found in someone who has been treated before, it is usually cancer that has spread.
A biopsy is a tissue sample taken from your prostate or other organs to look for cancer cells. There are many approaches to prostate biopsies. These can be done through a probe placed in the rectum, through the skin of the perineum (the area between the scrotum and rectum) and may use a specialized imaging device, such as MRI. The biopsy removes small pieces of tissue for review under a microscope. The biopsy takes 10 to 20 minutes. A pathologist (a doctor who classifies disease) looks for cancer cells within the samples. If cancer is seen, the pathologist will "grade" the tumor.
Grading and Staging
Prostate cancer is grouped into four stages, defined by how much and how quickly the cancer cells are growing. Staging uses the Gleason score and the T (tumor), N (node), M (metastasis) score.
Gleason Score
If a biopsy finds cancer, the pathologist gives it a grade. The most common grading system is called the Gleason grading system. The Gleason score is a measure of how quickly the cancer cells can grow and affect other tissue. Biopsy samples are taken from the prostate and given a Gleason grade by a pathologist. Lower grades are given to samples with small, closely packed cells. Higher grades are given to samples with more spread out cells. The Gleason score is set by adding together the two most common grades found in a biopsy sample. For example, if the most common pattern is grade 3 and the second most common is grade 4, the Gleason score is 3 + 4 = 7.
The Gleason score will help your doctor understand if the cancer is a low-, intermediate- or high-risk disease. The risk assessment reflects the risk of recurrence after treatment. Generally, Gleason scores of 6 are treated as low-risk cancers, scores of 7 as intermediate-risk cancers, and scores of 8 and above as high-risk cancers. Some of these high-risk tumors may have already spread by the time they are found.
Staging
The Tumor, Nodes and Metastasis (TNM) system is used for tumor staging. The T, N, M score is a measure of how far the prostate cancer has spread in the body. The T (tumor) score rates the size and extent of the original tumor. The N (nodes) score rates whether the cancer has spread into nearby lymph nodes. The M (metastasis) score rates whether the cancer has spread to distant sites.
Tumors found only in the prostate are more successfully treated than those that have metastasized (spread) outside the prostate. Tumors that have metastasized are incurable and require drug based therapies to treat the whole body.
Prostate Cancer Stage Groupings
Prostate cancer is staged as:
T1: Health care provider cannot feel the tumor
T1a: Cancer present in less than 5% of the tissue removed and low grade (Gleason less than 6)
T1b: Cancer present in more than 5% of the tissue removed or is of a higher grade (Gleason greater than 6)
T1c: Cancer found by needle biopsy done because of a high PSA
T2: Health care provider can feel the tumor with a DRE but the tumor is confined to prostate
T2a: Cancer found in one half or less of one side (left or right) of the prostate
T2b: Cancer found in more than half of one side (left or right) of the prostate
T2c: Cancer found in both sides of the prostate
T3: Cancer has begun to spread outside the prostate and may involve the seminal vesicles
T3a: Cancer extends outside the prostate but not to the seminal vesicles
T3b: Cancer has spread to the seminal vesicles
T4: Cancer has spread to nearby organs
N0: There is no sign of the cancer moving to the lymph nodes in the area of the prostate (becomes N1 if cancer has spread to lymph nodes)
M0: There is no sign of tumor metastasis (becomes M1 if cancer has spread to other parts of the body)
Treatment
The goal of advanced prostate cancer treatment is to shrink or control tumor growth and control symptoms. There are many treatment choices for advanced prostate cancer. Which treatment to use, and when, will depend on discussions with your doctor. It is best to talk to your doctor about how to handle side effects before you choose a plan.
Treatment options include:
Hormone Therapy
Chemotherapy
Immunotherapy
Combination Therapy
Bone-targeted Therapy
Radiation
Active Surveillance
Clinical Trials
What is Hormone Therapy?
Hormone therapy is a treatment that lowers a man's testosterone, or hormone, levels. This therapy is also called ADT. Testosterone, an important male sex hormone, is the main fuel for prostate cancer cells, so reducing its levels may slow the growth of those cells. Hormone therapy may help slow prostate cancer growth in men when prostate cancer has metastasized (spread) away from the prostate or returned after other treatments. Some treatments may be used to shrink or control a local tumor that has not spread. There are several types of hormone therapy for prostate cancer treatment, including medications and surgery. Your doctor may prescribe a variety of therapies over time.
Hormone Therapy with Surgery
Surgery to remove the testicles for hormone therapy is called orchiectomy or castration. When the testicles are removed, it stops the body from making the hormones that fuel prostate cancer. It is rarely used as a treatment choice in the United States. Men who choose this therapy want a one-time surgical treatment. They must be willing to have their testicles permanently removed and must be healthy enough to have surgery.
This surgery allows the patient to go home the same day. The surgeon makes a small cut in the scrotum (sac that holds the testicles). The testicles are detached from blood vessels and removed. The vas deferens (tube that carries sperm to the prostate before ejaculation) is detached. Then the sac is sewn up.
There are multiple benefits to undergoing orchiectomy to treat advanced prostate cancer. It is not expensive. It is simple and has few risks. It only needs to be performed once. It is effective right away. Testosterone levels drop dramatically.
Side effects to your body include infection and bleeding. Removing the testicles means the body stops making testosterone, so there is also a chance of the side effects listed below for hormone therapy. Other side effects of this surgery may be about body image due to the look of the genital area after surgery. Some men choose to have artificial testicles or saline implants placed in the scrotum to help the scrotum look the same as before surgery. Some men choose another surgery called subcapsular orchiectomy. This removes the glands inside the testicles, but it leaves the testicles themselves, so the scrotum looks normal.
Hormone Therapy with Medications
There are different types of hormone therapy available as injections or as pills taken by mouth. Some of these therapies stop the body from producing luteinizing hormone-releasing hormone (LHRH, also called gonadotropin-releasing hormone, or GnRH). LHRH triggers the body to make testosterone. Other therapies stop prostate cells from being affected by testosterone by blocking hormone receptors. Sometimes a blood test is done after the first shot to check testosterone levels. You may also have tests to monitor your bone density during treatment.
With LHRH treatment there is no need for surgery. Candidates for this treatment include men who cannot or do not wish to have surgery to remove their testicles.
There are different types of medical hormone therapy your doctor could prescribe to lower your body's production of testosterone. After your testosterone levels drop to a very low level, you are at "castration level." Once testosterone levels drop, prostate cancer cells may decrease in growth and proliferation.
Types of Medications
Agonists (analogs)
LHRH/GnRH agonists are drugs that lower testosterone levels. They may be used for cancer that has come back, whether or not it has spread.
When first given, agonists cause the body to produce a burst of testosterone (called a "flare"). Agonists are longer acting than natural LHRH. After the initial flare, the drug tricks your brain into thinking it does not need to produce LHRH/GnRH because it has enough. As a result, the testicles are not stimulated to produce testosterone.
LHRH or GnRH agonists are given as shots or as small pellets placed under the skin. Depending on the drug used, they may be given once every one, three or six months.
Antagonists
These drugs also lower testosterone. Instead of flooding the pituitary gland with LHRH, they stop LHRH from binding to receptors. There is no testosterone flare with an LHRH/GnRH antagonist because the body does not get the signal to produce testosterone.
Antagonists may be taken by mouth or injected (shot) under the skin, in the buttocks or abdomen. The shot is given in the health care provider's office. You will likely stay in the office awhile after the shot to ensure there is no allergic reaction. After the first shot, a blood test makes sure testosterone levels have dropped. You may also have tests to monitor bone density.
Anti-androgen drugs
Antiandrogen drugs are taken as a pill by mouth. Whether this therapy is used depends partly on where the cancer has spread and what effects it is causing.
This treatment lowers the effect of testosterone by inhibiting the androgen receptors in the prostate cancer cells. Normally, testosterone would bind with these receptors to fuel growth of prostate cancer cells. With the receptors blocked, testosterone cannot "feed" the prostate. Using anti-androgens a few weeks before, or during, LHRH therapy may reduce "flare-ups." Antiandrogens may also be used after surgical or medical castration when hormone therapy stops working.
CAB (combined androgen blockade, with anti-androgens)
This method blends castration (by surgery or with the drugs described above) and antiandrogen drugs. The treatment reduces production of testosterone and can help stop it from binding to cancer cells.
Surgery or oral drugs can lower the testosterone made by your testicles. The rest of the body's testosterone is made by the adrenal glands. Antiandrogen therapy blocks the effects of the testosterone made by the adrenal glands.
Androgen synthesis inhibitors
These drugs help stop other parts of your body (and the cancer itself) from making more testosterone and its metabolites. Men newly diagnosed with metastatic hormone sensitive prostate cancer (mHSPC) or men with metastatic castration-resistant prostate cancer (mCRPC) may be good candidates for this therapy.
Androgen synthesis inhibitors may be taken by mouth as a pill. This drug helps stop your body from releasing the enzyme needed to make androgens in the adrenal glands, testicles and prostate tissue, resulting in reduced levels of testosterone and other androgens. Because of the way it works, this drug must be taken with an oral steroid.
Hormone Therapy Side Effects
Unfortunately, hormone therapy may not work forever, and it does not cure the cancer. Over time, the cancer may grow in spite of the low hormone level. Other treatments are also needed to manage the cancer.
Hormone therapies have many possible side effects. Learn what they are. Intermittent (not constant) hormone therapy may also be a treatment option. Before starting any type of hormone therapy, talk with your health care provider.
Possible hormone therapy side effects include:
Lower libido (sexual desire) in most men
Erectile dysfunction, the inability to have or keep a strong enough erection for sex
Hot flashes (a sudden spread of warmth to the face, neck and upper body) and heavy sweating
There are many benefits and risks to each type of hormone therapy, so ask questions of your doctor so you understand what is best for you.
What is Chemotherapy?
Chemotherapy drugs can slow the growth of cancer. These drugs may reduce symptoms and extend life. Or they may ease pain and symptoms by shrinking tumors. Chemotherapy is useful for men whose cancer has spread to other parts of the body.
Most chemotherapy drugs are given through a vein (intravenous, IV). During chemotherapy, the drugs move throughout the body. They kill quickly growing cancer cells and non-cancer cells. Often, chemotherapy is not the main therapy for prostate cancer. But it may be a treatment option for men whose cancer has spread. Chemotherapy may be given before pain starts as it may prevent pain as cancer spreads to bones and other sites.
Side effects may include hair loss, fatigue, nausea and vomiting. There may be changes in your sense of taste and touch. You may be more prone to infections. You may experience neuropathy (tingling or numbness in the hands and feet). Due to the side effects from chemotherapy, the decision to use these drugs may be based on:
Your health and how well you can tolerate the drug
What other treatments you have tried
If radiation is needed to relieve pain quickly
What other treatments or clinical trials are available
Your treatment goals
If you use chemotherapy, your health care team may watch you closely to manage side effects. There are medicines to help with things like nausea. Most side effects stop once chemotherapy ends.
What is Immunotherapy?
Immunotherapy uses the body’s immune system to fight cancer. It may be a choice for men with mCRPC who have no symptoms or only mild symptoms.
If the cancer returns and spreads, your doctor may offer a cancer vaccine to boost your immune system so it can attack the cancer cells. Immunotherapy may be given to mCRPC patients before chemotherapy or it may be used along with chemotherapy.
Side effects are often in the first 24 hours after treatment and may include fever, chills, weakness, headache, nausea, vomiting and diarrhea. Patients may also have low blood pressure and rashes.
What is Combination Therapy?
Combination therapy uses more than one type of treatment at the same time, such as hormone therapy given together with radiation or chemotherapy.
What is Bone-targeted Therapy?
Bone-targeted therapy may help men with prostate cancer that has spread to the bones, as they may get "skeletal-related events" (SREs). SREs include fractures, pain and other problems. If you have advanced prostate cancer or are taking hormone therapy, your provider may suggest calcium, Vitamin D or other drugs for your bones. These drugs may stop the cancer, reduce SREs and help prevent pain and weakness from cancer growing in your bones.
Radiopharmaceuticals are drugs with radioactivity. They can be used to help with bone pain from metastatic cancer. Some may also be used for men whose mCRPC has spread to their bones. They may be offered when ADT is not working. Radiopharmaceuticals give off small amounts of radiation that go to the exact parts where cancer cells are growing.
Drugs used to reduce SREs may help reduce bone turnover. Side effects include low calcium, worsening kidney function and, rarely, destruction of the jawbone.
Calcium and Vitamin D are also used to help protect your bones. They are often recommended for men on hormone therapy to treat prostate cancer.
What is Radiation Therapy?
Radiation uses high-energy beams to kill tumors. Prostate cancer often spreads to the bones. Radiation can help ease pain or prevent fractures caused by cancer spreading to the bone.
There are many types of radiation treatment, and it may be given once or over several visits. Having treatment is much like having an x-ray. Some radiation techniques focus on sparing nearby healthy tissue: computers and software allow better planning and targeting of radiation doses, pinpointing the radiation to where it is needed.
Active Surveillance for Prostate Cancer
Active surveillance is mainly used to delay or avoid aggressive therapy. It is often used if you have a small, slow growing cancer. It may be a choice for men who do not have symptoms or want to avoid sexual, urinary or bowel side effects for as long as possible. Others may choose surveillance due to their age or overall health.
This method may require you to have many tests over time to track cancer growth. This lets your doctor know how things are going while you avoid treatment-related side effects. This will also help you and your health care team focus on managing cancer-related symptoms. Talk with your care team about whether this is a good choice for you.
Clinical Trials
Clinical trials are research studies that test new treatments or learn how to use existing treatments better. Clinical studies aim to find the treatment strategies that work best for certain illnesses or groups of people. For some patients, taking part in a clinical trial may be a treatment option.
Clinical trials follow strict scientific standards. These standards protect patients and help produce reliable study results. You will be given either a standard treatment or the treatment being tested. All of the approved treatments used to treat or cure cancer began in a clinical trial.
It is of great value to learn about the risks and benefits of the treatment being studied.
Other Considerations
Follow-Up Care
You and your doctor may schedule office visits for tests and follow-up over time. There are certain symptoms your doctor should know about right away, such as blood in your urine or bone pain, but it is best to ask your health care team about the symptoms you should report. Some men find it helpful to keep a diary to help remember things to talk about during follow-up visits.
Incontinence is the inability to control the release of urine and can sometimes happen with prostate cancer treatment. There are different types of incontinence:
Stress Urinary Incontinence (SUI), when urine leaks with coughing, laughing, sneezing or exercising or with any additional pressure on the pelvic floor muscles. This is the most common type.
Urge Incontinence, or the sudden urge to pass urine, even when the bladder is not full, because the bladder is overly sensitive. This might be called overactive bladder (OAB).
Mixed Incontinence, a combination of stress and urge incontinence with symptoms from both types.
Because incontinence may affect your physical and emotional recovery, it is important to understand how to manage this problem. There are treatment choices that may help incontinence. Talk with your doctor before trying any of these options.
Men may have sexual health problems following their cancer diagnosis or treatments. Erectile dysfunction (ED) is when a man finds it hard to get or keep an erection strong enough for sex. ED happens when there is not enough blood flow to the penis or when nerves to the penis are harmed.
Cancer in the prostate, colon, rectum and bladder are the most common cancers that can affect a man’s sexual health. Treatments for cancer, along with emotional stress, can lead to ED.
The chance of ED after prostate cancer treatment depends on many things, such as:
Age
Overall health
Medications you take
Sexual function before treatment
Cancer stage
Damage to your nerves or blood vessels from surgery or radiation
There are treatments that may help ED. They include pills, vacuum pumps, urethral suppositories, penile injections and implants. Treatment can be individualized. Some treatments may work better for you than others. They have their own set of side effects. A health care provider can talk with you about the pros and cons of each method and help you decide which single treatment or combination of treatments is right for you.
Because prostate cancer treatment can affect your appetite, eating habits and weight, it is important to try your best to eat healthy. If you have a hard time eating well, reach out to a registered dietitian/nutritionist (RDN). There are ways to help you get the nutrition you need. Always talk with your doctor before making changes to your diet.
Exercise
Exercise may improve your physical and emotional health. It can also help you manage your weight, maintain muscle and bone strength and help manage side effects.
Always talk with your doctor before starting or changing your exercise routine. If approved by your doctor, men may want to strive to exercise about one to three hours per week. Cardiovascular exercise and strength/resistance training may be good choices. This can include walking or more intense exercise.
This means that levels of male sex hormones, including androgens like testosterone, can be reduced to slow cancer growth. Unchecked, these male sex hormones "feed" the prostate cancer cells to let them grow. Hormone therapy, like ADT, may be used to reduce the levels of these hormones.
Metastatic Castration-Resistant Prostate Cancer (mCRPC)
Metastatic castration-resistant prostate cancer is when cancer has spread past the prostate into the body and is able to grow and spread even after treatments were used to lower testosterone levels. The PSA level keeps rising and metastatic spots are present or growing. This is disease progression despite medical or surgical castration.
Symptoms
Men with advanced prostate cancer may or may not have any signs of sickness. Symptoms depend on the size of new growth and where the cancer has spread in the body. With advanced disease, mainly if you have not had treatment to the prostate itself, you may have problems passing urine or see blood in your urine. Some men may feel tired, weak or lose weight. When prostate cancer spreads to bones, you may have bone pain. Tell your doctor and nurse about any pain or other symptoms you feel. There are treatments that can help.
Risks
Your risks for prostate cancer rise if you are age 65 or older, have a family history of prostate cancer, are African American or have inherited mutations of the BRCA1 or BRCA2 genes.
Age: For all men, prostate cancer risk increases with age. About 6 in 10 cases of prostate cancer are found in men older than 65. Prostate cancer is rare in men under the age of 40.
Race/Ethnicity: African American men and Caribbean men of African ancestry face a higher risk for being diagnosed with prostate cancer. They are also more likely to be diagnosed with prostate cancer at younger ages. It is not clear why prostate cancer affects African American men more than other racial/ethnic groups.
Unhealthy testosterone levels in men: causes and symptoms
If the "black dog" of depression has reared its ugly head at some point in your life, then low testosterone levels may have been at play. But this is hardly the only possible consequence of low testosterone, which you can detect at home with the Everlywell Testosterone Test.
So keep reading to learn more about testosterone in men, including causes and symptoms of "low T."
What is testosterone?
Testosterone is the primary hormone behind muscle-building, fat-burning, and libido, and it strongly affects mood and energy.
The testicles are the main source of testosterone production in men while the ovaries are in charge of producing this sex hormone in women. However, in women, levels of testosterone are typically lower compared to men.
In general, men begin to experience an increase in testosterone production during puberty, with testosterone levels gradually declining starting at about age 30. When natural testosterone levels begin to lower, both men and women can experience a number of different symptoms.
Signs and symptoms of low testosterone in men
Low testosterone levels in men can lead to symptoms that can affect many different aspects of health and well-being. Many men that experience a decrease in testosterone report sleep disturbances and insomnia, emotional changes such as depression, and issues related to their sexual performance/desires. Along with these symptoms, some men even face changes in fertility, decreased strength, and weight gain.
Athletic performance can also suffer due to loss of energy, as well as increased difficulty building muscle and burning fat. Having greater body fat and less muscle can then potentially increase the risk of heart disease, diabetes, and other conditions dependent on an optimal metabolism.
Signs of low testosterone in men can include:
Loss of motivation
Low libido
Fat gain
Sleep problems and/or fatigue
Low testosterone can cause unwanted health consequences in men. The effects of low testosterone may lead to:
Osteoporosis (where your bones become very brittle)
Infertility
Depression
Obesity
Erectile dysfunction (ED)
Loss of muscle mass
Having greater fat and less muscle can then potentially increase the risk of heart disease, diabetes, and other health conditions
Note that average levels of testosterone decrease as men age. Starting around age 30, testosterone decreases about 1% per year, on average. This decline is part of the normal aging process, so some older men develop abnormally low testosterone levels.
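Taken at face value, that roughly 1%-per-year average decline compounds over time. A quick sketch (the starting level of 600 ng/dL is hypothetical, and the 1% figure is a population average, not a prediction for any individual):

```python
# Illustrative only: project the ~1%-per-year average testosterone decline
# from age 30 described above. The 600 ng/dL baseline is hypothetical.

def projected_level(baseline_at_30: float, age: int, annual_decline: float = 0.01) -> float:
    """Compound the average annual decline from age 30 to the given age."""
    years = max(0, age - 30)
    return baseline_at_30 * (1 - annual_decline) ** years

# On average, 600 ng/dL at age 30 compounds down to about 444 ng/dL by age 60.
print(round(projected_level(600, 60)))
```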
The Everlywell at-home Testosterone Test can help men identify if their hormone levels are lower than what's typical for their age. If your testosterone levels are low, you can share your Everlywell results with your healthcare provider, and collaborate on a plan for a healthier lifestyle and/or medication that may help improve your testosterone levels.
What is testosterone replacement therapy and is it an effective treatment?
Testosterone replacement therapy, or TRT, can help treat some low testosterone symptoms in men. Doctors often recommend TRT as a treatment option for male hypogonadism, a condition in which the body doesn't make enough testosterone (often due to testicular failure). Ultimately, this condition can lead to symptoms of low testosterone in males.
Testosterone therapy can improve muscle strength and erectile function in hypogonadal men, as well as boost energy and protect against bone loss.
Testosterone can be administered in several different ways, including skin patches, gels applied to the skin, injections, and implants.
Testosterone replacement therapy and prostate cancer: is there a link?
In past decades, many scientists believed that higher levels of total testosterone came with an increased risk of prostate cancer. (Total testosterone is a measure of the total amount of testosterone circulating in your bloodstream, including testosterone that's bound to other compounds as well as testosterone that is unbound, or "free.")
It was thought that low testosterone production actually helped protect against prostate diseases, so restoring testosterone to normal levels through testosterone therapy could mean a greater chance of prostate cancer.
So could prostate cancer be a potential risk of TRT?
Not likely, say today's researchers. While initial studies (first published in 1941) suggested a link between high T levels and prostate cancer, much more modern research, using much more rigorous methods, has convincingly shown that testosterone therapy comes with "little if any risk" of prostate cancer.
Signs and symptoms of high testosterone in men
Men with high testosterone can experience a variety of troubling symptoms and possible health consequences. Excess testosterone can lead to more aggressive and irritable behavior, more acne and oily skin, even worse sleep apnea (if you already have it), and an increase in muscle mass. With too much testosterone pumping through your system, you may have a lower sperm count (due to decreased sperm production) and shrunken testicles.
High testosterone causes
Excess testosterone in men can result from testicular or adrenal tumors. Even if these tumors are benign (that is, not malignant or cancerous), they can still boost testosterone to unhealthy levels, as can steroid use and abuse.
And if you don't treat your high testosterone levels? Elevated testosterone will raise your "bad" cholesterol levels, and can thus lead to heart health issues, potentially resulting in a heart attack, cardiovascular disease, or stroke. Risk of sleep apnea and infertility is also heightened if you have high testosterone levels.
Some men actually have a genetic predisposition for developing high levels of testosterone. Studies show that these individuals that fall within this category are at a much higher risk for developing blood clots, heart disease, and a variety of other cardiovascular issues. Because of the severity of this issue, it is essential that men with high testosterone are tested and are aware of their potential risks.
Conclusion
Because high and low male testosterone levels are both disruptive to your health, it's critical that your male sex hormones are within a healthy range. But how can you check your testosterone to see if you have a deficiency or excess? You can easily measure your testosterone with the Everlywell Testosterone Test or the Men's Health Test, and discuss any unhealthy levels with your healthcare provider.
Abstract
Epidemiologic studies have failed to support the hypothesis that circulating androgens are positively associated with prostate cancer risk and some recent studies have even suggested that high testosterone levels might be protective particularly against aggressive cancer. We tested this hypothesis by measuring total testosterone, androstanediol glucuronide, androstenedione, DHEA sulfate, estradiol, and sex hormone–binding globulin in plasma collected at baseline in a prospective cohort study of 17,049 men. We used a case-cohort design, including 524 cases diagnosed during a mean 8.7 years follow-up and a randomly sampled subcohort of 1,859 men. The association between each hormone level and prostate cancer risk was tested using Cox models adjusted for country of birth. The risk of prostate cancer was ∼30% lower for a doubling of the concentration of estradiol but the evidence was weak (Ptrend = 0.07). None of the other hormones was associated with overall prostate cancer (Ptrend ≥ 0.3). None of the hormones was associated with nonaggressive prostate cancer (all Ptrend ≥ 0.2). The hazard ratio [HR; 95% confidence interval (95% CI)] for aggressive cancer almost halved for a doubling of the concentration of testosterone (HR, 0.55; 95% CI, 0.32-0.95) and androstenedione (HR, 0.51; 95% CI, 0.31-0.83), and was 37% lower for a doubling of the concentration of DHEA sulfate (HR, 0.63; 95% CI, 0.46-0.87). Similar negative but nonsignificant linear trends in risk for aggressive cancer were obtained for free testosterone, estradiol, and sex hormone–binding globulin (Ptrend = 0.06, 0.2, and 0.1, respectively). High levels of testosterone and adrenal androgens are thus associated with reduced risk of aggressive prostate cancer but not with nonaggressive disease. (Cancer Epidemiol Biomarkers Prev 2006;15(1):86–91)
Introduction
Although it is established that sex steroid hormones, particularly androgens, are essential to the growth, development, and maintenance of healthy prostate epithelium, and to the progression of prostate cancer, epidemiologic studies have thus far failed to show that high levels of circulating androgens increase the risk of developing prostate cancer—the “androgen hypothesis.” A review of 10 prospective epidemiologic studies where blood had been sampled before diagnosis of prostate cancer (1) found that there was no evidence that serum levels of endogenous sex hormones and their binding protein [sex hormone–binding globulin (SHBG)] were associated with the risk of developing prostate cancer. There was only a slightly increased risk associated with high levels of androstanediol glucuronide that was of marginal statistical significance. Of the 10 studies reviewed, only one (2) reported positive associations between androgen concentrations (testosterone and androstanediol glucuronide) and the risk of prostate cancer and, incidentally, inverse associations with estradiol and SHBG levels. Many reasons for the lack of evidence to support the “androgen hypothesis” have been offered in explanation, including laboratory measurement error in hormone assays and the heterogeneity of prostate cancer phenotypes, a problem that has been compounded in recent years by prostate-specific antigen (PSA) testing (3).
Some of these issues have been addressed in most of the five recent prospective studies that were published after Eaton's review (4-8). Interestingly, in three of these studies (6-8), the risk of prostate cancer was reduced in men with higher levels of testosterone although none of the estimates were statistically significant. Suggestive evidence from two studies (6, 7) led us to hypothesize that high testosterone levels decrease the risk of aggressive prostate cancer. We also hypothesized that estrogens and adrenal androgens might be inversely associated with aggressive prostate cancer.
We tested these hypotheses by analyzing a number of steroid hormones and related molecules measured in blood samples taken at baseline from men enrolled in the Melbourne Collaborative Cohort Study.
Materials and Methods
Subjects and Case-Cohort Design
The Melbourne Collaborative Cohort Study is a prospective cohort study of 41,528 people (17,049 men) ages between 27 and 75 years at baseline (99.3% of whom were ages 40-69 years). Recruitment occurred between 1990 and 1994 in the Melbourne metropolitan area. Details of the study have been published elsewhere (9, 10). The Human Research Ethics Committee of the Cancer Council Victoria approved the study protocol. Subjects gave written consent to participate and for the investigators to obtain access to their medical records.
A case-cohort design was used for studies that included the analysis of plasma. All men first diagnosed with prostate cancer between baseline and June 30, 2002, were eligible, as was a random sample (hereafter called the subcohort) of 2,167 men from the cohort. The study was designed to have the same power as a nested case-control study with two controls per case; preliminary analysis based on a method by Wacholder (11) suggested the subcohort needed to have 3.6 times as many members as there were cases of prostate cancer. For this analysis, men were excluded if they had a confirmed diagnosis of prostate cancer before baseline (n = 106 in the full cohort and 9 in the subcohort).
Case Ascertainment
Addresses and vital status of the subjects were determined by record linkage to Electoral Rolls, Victorian death records, the National Death Index, from electronic phone books, and from responses to mailed questionnaires and newsletters. Cases were ascertained by record linkage to the Victorian Cancer Registry, the population registry that covers the region in which the cohort resides. Between baseline attendance and June 30, 2002, 279 men in the full cohort had left Victoria and 1,257 had died.
A total of 614 men were diagnosed with prostate cancer over an average of 8.7 years of follow-up between 1990 and mid-2002. Seventy-five of these cases were members of the subcohort. Classification of cases as aggressive and nonaggressive was made on the basis that only cases with a distant-stage or poorly differentiated tumor have excess mortality compared with the general population (12). Prostate cancer was, therefore, defined as "aggressive" if the Gleason score was higher than 7 or if it was classified as poorly differentiated. Cases with stage T4 or N+ (positive lymph nodes) or M+ (distant metastases) were classified as aggressive irrespective of the Gleason score or grade of tumor differentiation. Nine cases had no blood collected at baseline and were therefore excluded, leaving 605 cases eligible for this study.
Assessment of Circulating Levels of Steroid Hormones
To study the relationship between disease and steroid sex hormones and other biological markers, each participant had blood collected at baseline of which 2 mL plasma was stored in liquid nitrogen. Hormone measurements were not made for 367 men (81 cases) because they had insufficient plasma left, a few samples were contaminated, and one batch of samples was not retrieved from storage. Therefore, hormone measurements and the statistical analysis were made for only 1,859 members of the subcohort (86%) and 524 case subjects (85%). There were no statistically significant differences in either demographics (age at baseline, year of attendance, country of birth, education, and smoking and alcohol consumption) or tumor characteristics (stage and Gleason score) between the men who had their hormones measured and those who did not.
Plasma samples were retrieved from storage, aliquoted into 450 μL amounts, and shipped on dry ice in batches of ∼80 samples each to the laboratory of one of us (H.A. Morris), where SHBG, testosterone, estradiol, androstanediol glucuronide, androstenedione, and DHEA sulfate (DHEAS) were to be measured. Assignment to batches was done randomly and the proportions of cases and subcohort members were approximately equal for all batches. Ten percent of the samples in each batch were aliquots from pooled plasma that had been stored with the samples from participants. The laboratory was blind to status of the samples. One scientist did all measurements.
Samples were thawed in a warm water bath, vortexed rapidly for a few seconds, and centrifuged at 2,000 rpm (210 × g) for 10 minutes. Total PSA was measured by microparticle enzyme immunoassay (AXSYM analyzer, Abbott Laboratories, Abbott Park, IL) with an interassay coefficient of variation (CV) at 0.4 ng/mL of 9.5%. DHEAS was measured by competitive immunoassay (IMMULITE analyzer, DPC, Los Angeles, CA) with a CV at 2.1 μmol/L of 12.4%. Testosterone followed by estradiol was measured by electrochemiluminescence immunoassay (Elecsys 2010 analyzer, Roche Diagnostics GmbH, Mannheim, Germany) with a CV for testosterone at 36 nmol/L of 1.6% and estradiol at 93 pmol/L of 11.1%. SHBG was measured by immunometric assay (IMMULITE analyzer, DPC) with a CV at 26 nmol/L of 6%. Androstenedione and androstanediol glucuronide were analyzed by RIA (DSL-4200 and DSL-6000, respectively; TX) with a CV for androstenedione at 3.3 nmol/L of 10.7% and androstanediol glucuronide at 21.1 nmol/L of 4.3%.
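The interassay coefficients of variation quoted above are, in the usual sense, the standard deviation of repeated measurements of the same pooled quality-control sample divided by their mean. A sketch with hypothetical QC values:

```python
# Interassay CV as reported for the pooled quality-control samples:
# CV (%) = 100 * standard deviation / mean across repeat runs.
from statistics import mean, stdev

def interassay_cv(qc_values):
    """Percent coefficient of variation across repeated QC measurements."""
    return 100 * stdev(qc_values) / mean(qc_values)

# Hypothetical repeat measurements of one testosterone QC pool (nmol/L):
qc = [35.4, 36.1, 35.8, 36.3, 35.6, 36.0]
print(f"{interassay_cv(qc):.1f}%")
```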
Before the study began, a reliability study was done. Plasma samples from 44 men who had given blood twice ∼1 year apart were each divided into two aliquots. The two aliquots were measured in separate batches a week apart. As a measure of reliability, we used the intraclass correlation, which is the proportion of the total variance due to variation between persons, where the total variance included components due to between persons, between-sampling occasions, and between-laboratory runs.
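The variance-components idea behind the intraclass correlation can be sketched as follows. This is a simplified one-way random-effects version (persons as groups, with sampling occasion and laboratory run collapsed into a single within-person error term), not the authors' exact estimator; the data are invented for illustration.

```python
import numpy as np

def icc_oneway(x):
    """One-way random-effects intraclass correlation.

    x: one row per person, one column per repeated measurement.
    Returns the proportion of total variance due to variation
    between persons.
    """
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    # Mean squares from a one-way ANOVA with persons as groups.
    ms_between = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Invented data: 4 "persons", 2 repeated measurements each.
demo = [[10.0, 10.5], [20.0, 19.5], [30.0, 30.5], [40.0, 39.4]]
print(round(icc_oneway(demo), 3))
```

When the within-person scatter is small relative to the spread between persons, the ICC approaches 1, which is what the high values reported below for DHEAS and SHBG indicate.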
Statistical Analysis
Steroid hormone levels were categorized into quartiles according to the distribution of the values for the subcohort. Quartiles were assigned within each laboratory batch to adjust for any variation between batches. Tests for linear trend were based on pseudocontinuous variables under the assumption that all subjects within each quartile had the same concentrations equal to the within-quartile median. The pseudocontinuous variables were log2 transformed before inclusion in the models so that the hazard ratio (HR) would represent the relative difference in risk associated with a doubling of the concentration.
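As a sketch of this procedure, the following assumes a pandas data frame with illustrative column names (`batch`, `hormone`); these names are not taken from the study's actual dataset.

```python
import numpy as np
import pandas as pd

def batchwise_quartiles(df, value_col="hormone", batch_col="batch"):
    """Assign quartiles within each laboratory batch (1 = lowest), then
    build the log2 pseudocontinuous variable by giving every subject the
    within-quartile median of their quartile."""
    df = df.copy()
    df["quartile"] = (
        df.groupby(batch_col)[value_col]
          .transform(lambda s: pd.qcut(s, 4, labels=False))
        + 1
    )
    df["q_median"] = df.groupby([batch_col, "quartile"])[value_col].transform("median")
    # One unit on the log2 scale is a doubling of concentration, so the
    # Cox coefficient of this variable is the log-HR per doubling.
    df["log2_pseudo"] = np.log2(df["q_median"])
    return df

demo = pd.DataFrame({
    "batch": ["a"] * 8,
    "hormone": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0],
})
out = batchwise_quartiles(demo)
print(out[["hormone", "quartile", "log2_pseudo"]])
```

Because the pseudocontinuous variable is on the log2 scale, exponentiating its fitted coefficient gives the HR associated with a doubling of the concentration, as described above.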
Free testosterone and free estradiol were calculated from the total concentration and from the concentration of SHBG using the law of mass action (13) under the assumption of a fixed albumin concentration of 40 g/L. For this calculation, we used the association constants of de Ronde et al. (14). The ratio estradiol/testosterone was calculated as an indicator of feminization.
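The mass-action calculation reduces to a closed-form quadratic in the free concentration. The sketch below follows the standard free-hormone algebra; the particular association-constant values are illustrative assumptions in the range reported in the literature, not figures taken from de Ronde et al. (14).

```python
import math

# Illustrative association constants (L/mol); not the exact values of
# de Ronde et al. (14).
K_SHBG = 1.0e9     # testosterone-SHBG
K_ALB = 3.6e4      # testosterone-albumin
ALBUMIN = 5.77e-4  # mol/L, i.e., the fixed 40 g/L used in the paper

def free_testosterone(total_t_nmol, shbg_nmol):
    """Free testosterone (nmol/L) from total testosterone and SHBG via
    the law of mass action; the binding equations reduce to a quadratic
    in the free concentration."""
    t = total_t_nmol * 1e-9  # nmol/L -> mol/L
    s = shbg_nmol * 1e-9
    n = 1.0 + K_ALB * ALBUMIN              # free plus albumin-bound pool
    a = K_SHBG * n
    b = n + K_SHBG * (s - t)
    c = -t
    ft = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return ft * 1e9  # mol/L -> nmol/L

ft = free_testosterone(total_t_nmol=15.0, shbg_nmol=40.0)
print(round(100.0 * ft / 15.0, 1), "% free")
```

With constants in this range, the free fraction comes out at roughly 1-3% of total testosterone, which is the physiologically expected order of magnitude.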
Cox proportional hazards regression models, with age as the time axis (15), were used to estimate HR values and 95% confidence intervals (95% CI). We used the Prentice method to take the case-cohort sampling into account and the robust method was used to calculate the variance-covariance matrix (16, 17). Follow-up for a subcohort member began at baseline and ended at diagnosis of prostate cancer or cancer of unknown primary site, death, the date last known to be in Victoria, or June 30, 2002, whichever came first. To estimate HR values for nonaggressive and aggressive cases and to test their difference, we fitted stratified Cox models based on competing risks using a data duplication method (18).
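One common formulation of the data-duplication approach for competing risks gives each subject one record per outcome type, with the event indicator set only on the record matching the observed outcome; a Cox model stratified on outcome type, with covariates interacted with type, then yields type-specific HRs. The sketch below uses hypothetical column names and may differ in detail from the method of reference (18).

```python
import pandas as pd

def duplicate_for_competing_risks(df, event_col="event_type",
                                  types=("nonaggressive", "aggressive")):
    """Duplicate each subject's record once per outcome type; the event
    indicator is 1 only on the record matching the observed outcome.
    Censored subjects (any other value in `event_col`) get 0 on both."""
    rows = []
    for _, r in df.iterrows():
        for t in types:
            rec = r.to_dict()
            rec["risk_type"] = t
            rec["event"] = int(r[event_col] == t)
            rows.append(rec)
    return pd.DataFrame(rows)

demo = pd.DataFrame({
    "id": [1, 2, 3],
    "time": [5.0, 7.2, 3.1],
    "event_type": ["aggressive", "censored", "nonaggressive"],
})
dup = duplicate_for_competing_risks(demo)
print(dup[["id", "risk_type", "event"]])
```

Testing whether a trend differs between aggressive and nonaggressive disease then amounts to testing the interaction between the hormone variable and `risk_type` in the stratified model.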
Tests based on Schoenfeld residuals (19) showed no evidence that proportional hazard assumptions were violated for any of the hormones. In a sensitivity analysis to investigate the possible effect of prevalent prostate cancers, we tested whether the HR values differed before and after the first 2 years of follow-up (15).
The Melbourne Collaborative Cohort Study was specifically designed to be a multiethnic cohort and the Italian and Greek communities were “oversampled” to obtain a wider range of exposures. Analyses were, therefore, adjusted for country of birth (Australia/New Zealand, United Kingdom, Italy, and Greece). Adjustments for smoking status, alcohol consumption, education, body mass index, and energy intake did not appreciably change the HR values, so these variables were not included in final analyses. To study the influence of simultaneous adjustment for all measured hormones, we included them in a single model. Free testosterone and estradiol and the ratio estradiol/testosterone were not included simultaneously with total testosterone and estradiol because of their high correlations.
Statistical analyses were done using Stata/SE 8.2 (Stata Corporation, College Station, TX). Because the robust method was used to calculate the variance-covariance matrix, the Wald test, not the likelihood ratio test, was used to test hypotheses. All P values were two-sided and P < 0.05 was considered as statistically significant.
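With a robust variance estimate, the Wald test for a single coefficient reduces to z = beta / SE(robust), with a two-sided P value from the standard normal. A minimal sketch follows; the standard error below is illustrative, chosen only so the example lands near the reported estradiol trend, and is not taken from the paper.

```python
import math

def wald_test(beta, robust_se):
    """Two-sided Wald test for a single coefficient: z = beta / SE,
    P value from the standard normal (erfc form of 2 * (1 - Phi(|z|)))."""
    z = beta / robust_se
    p = math.erfc(abs(z) / math.sqrt(2.0))
    return z, p

# log-HR for a doubling of estradiol: HR = 0.71 -> beta = ln(0.71).
# The robust SE of 0.19 is an assumed, illustrative value.
z, p = wald_test(math.log(0.71), 0.19)
print(round(z, 2), round(p, 3))
```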
Results
For the case subjects included in the analysis, the mean age at diagnosis was 67 years (range, 47-80 years) and 88 (17%) had aggressive cancer (i.e., had Gleason score >7 or had extraprostatic invasion: T4 or N+ or M+). Table 1 shows baseline characteristics of the cases and the subcohort. About 72% of men in the subcohort were born in Australia, New Zealand, or the United Kingdom and 27% in Italy or Greece.
Table 1.
Demographic characteristics and hormone levels of subjects (cases and subcohort)
* A tumor was classified as aggressive if Gleason score was higher than 7 or if stage was advanced (T4 or N+ or M+). We were not able to define aggressiveness for six cases because Gleason score and tumor stage were not available (clinical diagnoses only).
† The numbers of missing measures were 25 for PSA, 24 for total testosterone, 47 for DHEAS, 2 for SHBG, 8 for androstenedione, 81 for total estradiol, and 200 for androstanediol glucuronide.
‡ Derived from the total concentration (bound + free) and the concentration of SHBG using the law of mass action, under the assumption of a fixed albumin concentration of 40 g/L (5.77 × 10−4 mol/L), and using the association constants of de Ronde et al. (14).
Table 2 shows HR values for prostate cancer by quartiles and the test for linear trend across the quartiles for each hormone, adjusted for country of birth. There was little evidence that levels of androgens influenced overall risk of prostate cancer. The risk of prostate cancer was ∼30% lower for a doubling of the concentration of estradiol (HR, 0.71; 95% CI, 0.50-1.03) but the evidence was weak (Ptrend = 0.07). The HR values for quartiles II to IV relative to the first quartile of estradiol were all between 0.68 and 0.73 and significantly less than unity.
Table 2.
Relative risk of prostate cancer by quartile of hormone levels
Hormone | Quartile I* | Quartile II: HR‡ (95% CI) | Quartile III: HR (95% CI) | Quartile IV: HR (95% CI) | Ptrend†
T | Reference | 1.30 (0.97-1.75) | 1.09 (0.80-1.48) | 1.09 (0.80-1.47) | 0.9
DHEAS | Reference | 0.84 (0.64-1.10) | 0.95 (0.71-1.26) | 0.82 (0.58-1.15) | 0.3
SHBG | Reference | 1.14 (0.83-1.57) | 1.00 (0.72-1.38) | 0.92 (0.67-1.26) | 0.3
A | Reference | 0.92 (0.69-1.22) | 0.88 (0.66-1.17) | 0.92 (0.69-1.24) | 0.5
E2 | Reference | 0.68 (0.50-0.93) | 0.68 (0.50-0.92) | 0.73 (0.55-0.98) | 0.07
AG | Reference | 0.91 (0.68-1.22) | 0.83 (0.61-1.14) | 0.87 (0.64-1.18) | 0.3
E2/T | Reference | 1.13 (0.83-1.54) | 1.01 (0.75-1.37) | 0.93 (0.69-1.27) | 0.5
Free T | Reference | 1.35 (1.02-1.79) | 1.20 (0.89-1.61) | 1.01 (0.74-1.38) | 0.9
Free E2 | Reference | 0.95 (0.71-1.27) | 0.83 (0.61-1.12) | 0.90 (0.67-1.20) | 0.4
* The quartiles were assigned within each laboratory batch to adjust for any variation between batches.
† The hypothesis of a linear trend in the HR was tested by including in the model a pseudocontinuous variable computed by assigning the median level of the specific hormone within each quartile.
‡ HR values from Cox regression models adjusted for country of birth (Australia/New Zealand, United Kingdom, Italy, and Greece). The Prentice method has been used to take into account the case-cohort sampling (see Materials and Methods).
Competing risk analyses showed that the linear trends in the HR values for testosterone, DHEAS, androstenedione, and free testosterone differed significantly between aggressive and nonaggressive cancers (P = 0.005 for androstenedione, 0.007 for DHEAS, 0.01 for testosterone, and 0.03 for free testosterone; Table 3). Although there was virtually no relationship between the incidence of nonaggressive prostate cancer and hormone levels, the risk of aggressive prostate cancer significantly decreased with increasing levels of testosterone, DHEAS and androstenedione (Ptrend between 0.005 and 0.03). For example, the risk almost halved with a doubling of the concentration of testosterone (HR, 0.55; 95% CI, 0.32-0.95) and androstenedione (HR, 0.51; 95% CI, 0.31-0.83), and was 37% lower with a doubling of the concentration of DHEAS (HR, 0.63; 95% CI, 0.46-0.87). The dose-response relationship for free testosterone was virtually identical to that observed for testosterone, the HR for a doubling of the concentration being 0.54 (95% CI, 0.29-1.01, Ptrend = 0.06). Similar, but not statistically significant, negative linear trends in risk for aggressive prostate cancer were observed for estradiol and SHBG (Ptrend = 0.2 and 0.1, respectively). The HR values did not change appreciably after removing the adjustment for country of birth or after further adjustment for baseline PSA values. The HR values relative to the first 2 years of follow-up did not significantly differ from the estimates relative to the rest of the follow-up (data not shown).
Table 3.
Relative risk of prostate cancer by quartile of hormone levels and by tumor aggressiveness
Hormone | Quartile I* | Quartile II: HR§ (95% CI) | Quartile III: HR (95% CI) | Quartile IV: HR (95% CI) | Ptrend† | P‡
Nonaggressive cases
T | Reference | 1.40 (1.01-1.93) | 1.18 (0.85-1.64) | 1.25 (0.90-1.72) | 0.4 | —
DHEAS | Reference | 0.92 (0.69-1.23) | 1.08 (0.79-1.47) | 0.96 (0.67-1.38) | 0.9 | —
SHBG | Reference | 1.25 (0.89-1.77) | 0.98 (0.69-1.40) | 1.01 (0.72-1.42) | 0.6 | —
A | Reference | 1.04 (0.76-1.42) | 1.04 (0.76-1.41) | 1.09 (0.79-1.49) | 0.6 | —
E2 | Reference | 0.78 (0.56-1.08) | 0.82 (0.60-1.13) | 0.76 (0.55-1.05) | 0.2 | —
AG | Reference | 0.94 (0.69-1.29) | 0.81 (0.58-1.13) | 0.90 (0.65-1.25) | 0.4 | —
E2/T | Reference | 1.14 (0.83-1.59) | 1.03 (0.75-1.43) | 0.84 (0.60-1.17) | 0.2 | —
Free T | Reference | 1.45 (1.07-1.96) | 1.29 (0.93-1.78) | 1.16 (0.83-1.63) | 0.4 | —
Free E2 | Reference | 1.01 (0.74-1.38) | 0.87 (0.63-1.21) | 0.94 (0.69-1.30) | 0.6 | —
Aggressive cases
T | Reference | 0.96 (0.54-1.70) | 0.67 (0.36-1.25) | 0.53 (0.28-1.03) | 0.03 | 0.01
DHEAS | Reference | 0.53 (0.31-0.92) | 0.54 (0.29-1.02) | 0.38 (0.15-0.95) | 0.005 | 0.007
SHBG | Reference | 0.71 (0.36-1.39) | 0.90 (0.48-1.70) | 0.54 (0.28-1.04) | 0.1 | 0.2
A | Reference | 0.56 (0.31-1.00) | 0.49 (0.27-0.88) | 0.46 (0.24-0.88) | 0.007 | 0.005
E2 | Reference | 0.40 (0.21-0.74) | 0.24 (0.12-0.50) | 0.63 (0.37-1.09) | 0.2 | 0.5
AG | Reference | 0.71 (0.38-1.36) | 1.04 (0.56-1.91) | 0.80 (0.41-1.55) | 0.7 | 0.9
E2/T | Reference | 0.93 (0.46-1.88) | 0.91 (0.47-1.79) | 1.47 (0.79-2.74) | 0.2 | 0.06
Free T | Reference | 1.00 (0.57-1.73) | 0.81 (0.44-1.48) | 0.50 (0.24-1.04) | 0.06 | 0.03
Free E2 | Reference | 0.75 (0.41-1.38) | 0.73 (0.39-1.37) | 0.73 (0.39-1.36) | 0.3 | 0.5
NOTE: A tumor was classified as aggressive if Gleason score was higher than 7 or stage was advanced (T4 or N+ or M+). We were not able to define aggressiveness for six cases because Gleason score and tumor stage were not available (clinical diagnoses only).
* The quartiles were assigned within each laboratory batch to adjust for any variation between batches.
† The hypothesis of a linear trend in the HR was tested by including in the model a pseudocontinuous variable computed by assigning the median level of the specific hormone within each quartile.
‡ Test for difference in the estimates for the pseudocontinuous variables (i.e., linear trend) between aggressive and nonaggressive cases.
§ HR values from Cox regression models adjusted for country of birth (Australia/New Zealand, United Kingdom, Italy, and Greece). The Prentice method has been used to take into account the case-cohort sampling (see Materials and Methods).
The inclusion of all measured hormones in a single model did not appreciably change the HR values for overall prostate cancer (all Ptrend ≥ 0.09, data not shown). The inclusion of all measured hormones in the competing risk model widened 95% CI values and increased HR values associated with a doubling of the concentrations of testosterone, DHEAS, and androstenedione. The HR values for aggressive prostate cancer, however, remained well below unity: 0.75 (95% CI, 0.33-1.67) for testosterone, 0.70 (95% CI, 0.48-1.02) for DHEAS, and 0.67 (95% CI, 0.36-1.25) for androstenedione. The HR associated with a doubling of the concentration of estradiol increased to 0.99 (95% CI, 0.35-2.81). No HR for any other hormone was statistically significant after simultaneous adjustment.
Reliability and Quality Control
From the reliability study, the intraclass correlation for testosterone was 0.73 (95% CI, 0.60-0.86), for DHEAS 0.91 (95% CI, 0.86-0.95), for SHBG 0.88 (95% CI, 0.82-0.94), for androstenedione 0.46 (95% CI, 0.25-0.68), for estradiol 0.65 (95% CI, 0.41-0.88), for androstanediol glucuronide 0.84 (95% CI, 0.75-0.92), and for PSA 0.56 (95% CI, 0.35-0.76). For the pooled plasma samples, the overall CV was 7% for testosterone (4% within batches and 5% between batches), 10% for DHEAS (9% and 6%), 7% for SHBG (6% and 4%), 15% for androstenedione (11% and 9%), 10% for estradiol (8% and 6%), 10% for androstanediol glucuronide (9% and 5%), and 12% for PSA (8% and 10%).
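A simple moment-based way to decompose QC variability into within-batch and between-batch components (and the overall CV) is sketched below; the data are invented, and the exact estimator used in the study may differ.

```python
import numpy as np

def qc_cvs(batches):
    """Within-batch, between-batch, and overall CV (%) for pooled QC
    aliquots measured across laboratory batches.

    batches: list of 1-D sequences of QC measurements, one per batch.
    """
    batches = [np.asarray(b, dtype=float) for b in batches]
    grand = np.concatenate(batches).mean()
    # Pooled variance of replicates around their batch means.
    within_var = np.mean([b.var(ddof=1) for b in batches])
    # Variance of the batch means around the grand mean.
    between_var = np.array([b.mean() for b in batches]).var(ddof=1)

    def cv(v):
        return 100.0 * np.sqrt(v) / grand

    return cv(within_var), cv(between_var), cv(within_var + between_var)

# Invented QC data: three batches of three aliquots each.
w, b, o = qc_cvs([[10.0, 10.4, 9.8], [10.9, 11.2, 11.0], [9.5, 9.9, 9.7]])
print(round(w, 1), round(b, 1), round(o, 1))
```

Note that the variances, not the CVs, add: the overall CV is the square root of the sum of the within- and between-batch variance components divided by the grand mean.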
Discussion
We found that prediagnostic circulating levels of testosterone and other androgens were associated with a reduced risk of aggressive, but not localized, prostate cancer. Further, levels of a major estrogen, estradiol, also seemed to be protective against aggressive disease. Our findings do not support the long prevailing “androgen hypothesis” that high levels of circulating androgens increase the risk of prostate cancer (3).
The main strengths of our study are its large size, high level of follow-up, and a large number of aggressive cases relative to other studies. To increase phenotype specificity compared with the other two studies that considered tumor aggressiveness (6, 7), we did not include T3 cases with Gleason scores of 7 or lower with the aggressive cases. Another strength is the quality of our hormone measurement as evidenced by high intraclass correlations and low CV values for pooled plasma samples. One weakness is the lack of information on family history of prostate cancer.
Historically, the lack of epidemiologic evidence for any association between circulating androgens and prostate cancer may have been due to grouping all prostate cancers as a single entity. This interpretation is supported by the lack of associations when all prostate cancers were analyzed together (see Table 2). Older epidemiologic studies may also have had problems with measuring various hormone levels adequately, which would have tended to attenuate risk estimates. However, none of the five cohort studies of hormones and prostate cancer risk published since Eaton's review (1) found evidence to support the androgen hypothesis (4-8). A nested case-control analysis of 300 cancers and 300 controls from the CARET study (6) showed that higher serum testosterone concentrations were not associated with increased risk [odds ratio (OR), 0.72; 95% CI, 0.45-1.14] and that OR values for higher concentrations of androstenedione, DHEAS, and androstanediol glucuronide were not significantly different from unity. A nested case-control study of 166 cases and 332 controls within a Finnish cohort study (5) showed no association between testosterone, SHBG, and androstenedione and prostate cancer risk: the relative risk comparing the highest and lowest quintiles of testosterone being 1.27 (95% CI, 0.67-2.37). The Massachusetts Male Aging Study measured serum levels of 17 hormones (4) and, comparing 70 cases of prostate cancer with the remaining 1,576 members, reported only one significant finding: a nonlinear association with androstanediol glucuronide levels.
Our findings are consistent with some recent well-conducted prospective studies that show that higher circulating levels of testosterone are associated not with an increased but with a decreased risk of prostate cancer (7, 8). In a case-control study of 708 cases and 2,242 controls nested in three Scandinavian cohort studies, men in the highest quintiles of serum levels of testosterone had a 20% lower risk of prostate cancer than men in the lowest quintile but the result was not statistically significant (OR, 0.80; 95% CI, 0.59-1.06; ref. 8). The most recent analysis, from the Health Professionals Follow-up Study (7), included 460 prostate cancer cases diagnosed in the PSA era (only 40 of which were regionally invasive or metastatic) and 460 age-matched controls that were PSA screened after their blood draw. Based on Gleason score, they categorized cases into low grade (<7) and high grade (≥7). Although no association was observed between plasma testosterone, DHT, androstanediol glucuronide, estradiol, or SHBG and total prostate cancer, a positive association with testosterone (top compared with bottom quartile) was observed for low-grade disease (OR, 1.91; 95% CI, 0.89-4.07; Ptrend = 0.02) and an inverse association was observed for high-grade disease (OR, 0.26; 95% CI, 0.10-0.66; Ptrend = 0.01). For high-grade disease, SHBG was positively associated with risk (OR, 2.72; 95% CI, 1.02-7.24) as was estradiol/testosterone (OR, 3.02; 95% CI, 1.29-7.04). An analysis of regionally invasive or metastatic disease, comparing top and bottom quartiles of testosterone, gave an OR of 0.48 (95% CI, 0.06-3.69).
Our finding that both DHEAS and androstenedione had similar associations to testosterone with the risk of aggressive prostate cancer is noteworthy as both of these weaker androgens can be converted by 17-β-hydroxysteroid dehydrogenase to testosterone (20). Further, androstenedione can be converted to DHT by first being converted by 5α reductase type 2 to androstanedione and then by 17-β-hydroxysteroid dehydrogenase to DHT (20). Interestingly, androstanediol glucuronide levels, which are supposed to reflect the conversion of testosterone to DHT (21), were not associated with the risk of either local or aggressive disease.
As discussed in a recent review, there are additional complexities that need to be considered to clarify the effect of hormones on prostate cancer risk (3); for example, the relevance of hormone measurements that are usually made on a single blood sample drawn in middle age, the relevance of circulating levels of hormones vis a vis intraprostatic levels, and the possibility that the hormonal milieu in utero and during puberty might be important to carcinogenic processes much later in life.
In conclusion, the results of our study contribute to the gathering evidence that the longstanding “androgen hypothesis” of increasing risk with increasing androgen levels can be rejected, suggesting instead that high levels within the reference range of androgens, estrogens, and adrenal androgens decrease the risk of aggressive prostate cancer. This evidence is consistent with the role that testosterone plays in the proper differentiation of prostatic epithelium and with the rising incidence of prostate cancer with the androcline: as testosterone levels decline with increasing age, their control over differentiation similarly declines. A recent report on the Prostate Cancer Prevention Trial, an intervention trial to prevent prostate cancer using finasteride, a 5α reductase inhibitor, showed a decrease in the risk of all prostate cancer in the intervention arm together with an excess of high-grade cancers (22). Although the increased incidence of high-grade tumors in the group treated with finasteride might be due to a pathologic artifact (22), our results suggest that the drug, by lowering androgen levels, might favor the development of aggressive prostate cancer.
Grant support: VicHealth and The Cancer Council Victoria (cohort recruitment); National Health and Medical Research Council grants 251533, 209057, and 126402; and The Cancer Council Victoria.
The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
Acknowledgments
We thank the original investigators and the diligent team who recruited the participants and who continue working on follow-up, the many thousands of Melbourne residents who continue to participate in the study, and Sonia Dunn for assistance with the measures in plasma.
Testosterone and Prostate Cancer: What's the Connection?
Hypogonadism affects an estimated 2.4 million men over age 40 in the United States. By their 70s, one-quarter of men will have this condition.
Testosterone therapy can improve quality of life in men with low testosterone. However, it’s been a controversial practice since some research has suggested that testosterone fuels prostate cancer growth.
In the early 1940s, researchers Charles Brenton Huggins and Clarence Hodges discovered that when men’s testosterone production dropped, their prostate cancer stopped growing. The researchers also found that giving testosterone to men with prostate cancer made their cancer grow. They concluded that testosterone promotes prostate cancer growth.
As further evidence, one of the main treatments for prostate cancer — hormone therapy — slows cancer growth by lowering testosterone levels in the body. The belief that testosterone fuels prostate cancer growth has led many doctors to avoid prescribing testosterone therapy for men who have a history of prostate cancer.
In recent years, research has challenged the link between testosterone and prostate cancer. Some studies have contradicted it, finding a higher risk of prostate cancer among men with low testosterone levels.
A 2016 meta-analysis of research found no relationship between a man’s testosterone level and his risk of developing prostate cancer. Another review of studies showed that testosterone therapy doesn’t increase the risk of prostate cancer or make it more severe in men who have already been diagnosed.
According to a 2015 review in the journal Medicine, testosterone replacement therapy also doesn’t increase prostate specific antigen (PSA) levels. PSA is a protein that’s elevated in the bloodstream of men with prostate cancer.
Whether testosterone therapy is safe for men with a history of prostate cancer is still an open question. More studies are needed to understand the connection. The existing evidence suggests that testosterone therapy may be safe for some men with low testosterone who have successfully completed prostate cancer treatment and are at low risk for a recurrence.
Although the role of testosterone in prostate cancer is still a matter of some debate, other risk factors are known to affect your odds of getting this disease. These include your:
Age. Your risk for prostate cancer rises the older you get. The median age of diagnosis is 66, with the majority of diagnoses occurring in men between the ages of 65 and 74.
Family history. Prostate cancer runs in families. If you have one relative with the disease, you’re twice as likely to develop it. Genes and lifestyle factors that families share both contribute to the risk. Some of the genes that have been linked to prostate cancer are BRCA1, BRCA2, HPC1, HPC2, HPCX, and CAPB.
Race. African-American men are more likely to get prostate cancer and to have more aggressive tumors than white or Hispanic men.
While you can’t do anything about factors like your age or race, there are risks you can control.
Adjust your diet
Eat a mostly plant-based diet. Increase the amount of fruits and vegetables in your diet, especially cooked tomatoes and cruciferous vegetables like broccoli and cauliflower, which may be protective. Cut back on red meat and full-fat dairy products like cheese and whole milk.
Men who eat a lot of saturated fat have an increased risk of prostate cancer.
Eat more fish
Add fish to your weekly meals. The healthy omega-3 fatty acids found in fish like salmon and tuna have been linked to a reduced risk for prostate cancer.
Although doctors were once concerned that testosterone therapy might cause or accelerate prostate cancer growth, newer research challenges that notion. If you have low testosterone and it’s affecting your quality of life, talk to your doctor. Discuss the benefits and risks of hormone therapy, especially if you have a history of prostate cancer.
Last medically reviewed on October 30, 2017
How low testosterone treatment can help – and harm – a man's sex drive and fertility
On average, a testosterone level of 300–1,000 nanograms per deciliter (ng/dL) of blood is normal. But a healthy level really depends on your age, lifestyle, and bioavailable testosterone level – the unbound testosterone your body isn't using for daily functions.
If you listen to sports radio, it seems as if every other ad is pushing a new low testosterone (low-T) treatment: More energy! Bigger muscles! Better sex! All with a simple pill, shot, or gel!
When something sounds too good to be true, it usually is.
Many men and a surprising number of providers don't realize that taking exogenous (synthetic) testosterone or over-the-counter supplements may have harmful side effects if not administered properly. Tinkering with your testosterone levels without direction from a qualified specialist can cause other health issues, such as testicular atrophy, infertility, and an increased risk of prostate cancer.
An estimated 1 in 50 men have low-T and experience symptoms such as less energy, decreased libido (sex drive), erectile dysfunction, lack of concentration, or trouble sleeping. Around age 30, a man's testosterone levels may slowly begin to decline. Approximately 35% of men in their 70s have low-T, according to the American Urological Association.
But we're beginning to see more men in their 20s with low-T at the UT Southwestern male urology clinic. Sometimes low-T is caused by medical conditions, such as genetic diseases or past chemotherapy or radiation therapy. More often, symptoms can be linked to sedentary lifestyle, poor diet, anxiety, or depression.
So, before you call that low-T clinic or click on an outlandish ad for testosterone-boosting supplements, find out what's at stake for your health. There are safer, more cost-effective options to restore youthful energy – and potentially reverse fertility loss from previous testosterone products.
What's a normal testosterone level?
Testosterone is a natural hormone produced primarily in the testicles, and it helps men maintain everything from bone density and body hair to sex drive and sperm production. However, you don't have to hit a certain number or level to be "a real man," despite what the constant flow of ads may tell you. What matters is who you are and where you are in your life.
On average, a testosterone level of 300–1,000 nanograms per deciliter (ng/dL) of blood is normal. Hypogonadism – reduced testicular function – generally occurs when the total testosterone is less than 300 ng/dL. However, a healthy level for you depends on your age, lifestyle, and bioavailable testosterone level – the unbound testosterone your body isn't using for daily functions.
Unlike many low-T clinics, we calculate bioavailable testosterone by measuring levels of two proteins, sex hormone binding globulin and albumin, that typically bind to testosterone. It's possible to have a normal total testosterone level and experience low-T symptoms if this balance is off.
Having a normal bioavailable testosterone level tells us your body is making plenty and you likely won't benefit from testosterone replacement therapy. If your bioavailable testosterone level is low, we can discuss options.
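The article doesn't publish the clinic's exact formula, but a widely used way to estimate free and bioavailable testosterone from these same three inputs (total testosterone, SHBG, and albumin) is the Vermeulen calculation. The Python sketch below is illustrative only, not medical advice: the binding constants, the default albumin of 4.3 g/dL, and the example lab values are assumptions, and a clinic's in-house method may differ.

```python
import math

# Association constants from the Vermeulen (1999) approximation.
# These exact values are an assumption; labs may use different ones.
K_SHBG = 1.0e9   # L/mol, SHBG-testosterone binding
K_ALB = 3.6e4    # L/mol, albumin-testosterone binding

NG_DL_TO_MOL_L = 3.467e-11  # 1 ng/dL of testosterone in mol/L (MW ~288.4)

def bioavailable_testosterone(total_t_ng_dl, shbg_nmol_l, albumin_g_dl=4.3):
    """Estimate free and bioavailable testosterone (both in ng/dL) from
    total testosterone (ng/dL), SHBG (nmol/L), and albumin (g/dL)."""
    tt = total_t_ng_dl * NG_DL_TO_MOL_L        # mol/L
    shbg = shbg_nmol_l * 1e-9                  # mol/L
    alb = albumin_g_dl * 10.0 / 69000.0        # g/dL -> g/L -> mol/L (MW ~69 kDa)
    n = 1.0 + K_ALB * alb                      # free + albumin-bound scaling factor
    # Solve the binding-equilibrium quadratic for free testosterone.
    b = n + K_SHBG * (shbg - tt)
    free = (-b + math.sqrt(b * b + 4.0 * n * K_SHBG * tt)) / (2.0 * n * K_SHBG)
    free_ng_dl = free / NG_DL_TO_MOL_L
    bio_ng_dl = free_ng_dl * n                 # bioavailable = free + albumin-bound
    return free_ng_dl, bio_ng_dl

free_t, bio_t = bioavailable_testosterone(500, 40)  # hypothetical lab values
print(f"free ~{free_t:.0f} ng/dL, bioavailable ~{bio_t:.0f} ng/dL")
```

For these example inputs (total testosterone of 500 ng/dL, SHBG of 40 nmol/L), the estimate comes out to roughly 2% free and roughly 40-45% bioavailable testosterone. It also illustrates the point in the text: raising SHBG while holding total testosterone fixed drives the bioavailable fraction down, so a man can have a normal total level and still experience low-T symptoms.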
Risks of off-the-shelf testosterone therapy
Avoid supplements over the counter. None are regulated or approved by the U.S. Food and Drug Administration (FDA), which means you can't verify what they're made with or whether they're safe, even if they come with a celebrity endorsement. Some testosterone supplements have been shown to cause health conditions such as erectile dysfunction or kidney failure.
Low-T clinics tend to overtreat, making blanket recommendations around the patient's total testosterone and not their individual health needs.
Your best bet is to see a board-certified urologist with expertise in hypogonadism, or a fertility expert who is experienced in treating male patients. A personal approach can help you avoid a range of complications such as:
Infertility
Natural testosterone and sperm production is fueled by two hormones created in the pituitary gland of the brain: luteinizing hormone (LH) and follicle-stimulating hormone (FSH). When a man takes synthetic testosterone, the brain detects the excess and slows or stops production of LH and FSH. Without those signals, the body quits producing intratesticular (natural) testosterone and, consequently, sperm. This results in reduced fertility.
Testicular atrophy
Little to no LH and FSH production means no stimulation of the testicles. If the testicles aren't stimulated by these hormones, they may atrophy, or shrivel up. Testicular atrophy has been associated with long-term use of exogenous testosterone or over-the-counter steroids.
Development of male breasts
In some men, estrogen levels will increase as testosterone levels increase. Men naturally need some estrogen, one of the main sex hormones that women have, in the body for bone health and other body functions. But too much estrogen can cause conditions such as gynecomastia (male breast tissue). Excess estrogen can also cause sleep apnea, edema (swelling), and acne.
Inflated testosterone levels can increase risks to your prostate.
Increased risk of prostate cancer
I've had patients come to us from low-T clinics with testosterone levels as high as 3,000 ng/dL. That's unnecessary, and that degree of chemical modification increases the risk of an enlarged prostate and may increase the risk of prostate cancer. Even safe, moderate testosterone therapy carries a slightly increased risk.
Missed physical diagnoses
Low-T clinics typically don't screen for serious health conditions that can affect testosterone production. For example, patients may need bloodwork to measure prolactin, a hormone made by the pituitary gland that, in high levels, can be a sign of a pituitary tumor.
To follow a patient on testosterone replacement therapy, the provider should check your prostate-specific antigen (PSA), a natural protein that, in high levels, is associated with prostate cancer risk. You'll also need regular measurements of your hemoglobin (a blood protein) and hematocrit (red blood cells), which help carry oxygen through the body. An imbalance can indicate a serious medical issue, such as cancer, anemia, or kidney disease.
Untreated mental health issues
Often, we find that anxiety or depression – not hormonal imbalances – are the root cause of low-T-like symptoms. Suppressing your emotions can interfere with normal functions, such as focusing at work or maintaining an erection. If you truly have low-T, underlying stressors can make symptoms worse.
Everyone can benefit from an unbiased, listening ear now and then. I've referred many patients to a therapist, and their concerns often resolve without testosterone therapy.
When testosterone therapy might help
Men who are no longer interested in conceiving may benefit from safe, monitored testosterone replacement therapy. Some patients with genetic issues that cause subfertility, such as Klinefelter's syndrome, may also benefit.
Losing weight through exercise and eating a healthy diet can help naturally improve testosterone levels.
Natural options
Lose a few pounds: Approximately 30% of obese men have low-T. Since muscle burns more calories than fat, the more muscle you have, the less likely your body is to store excess calories as fat. However, some research suggests that low-T contributes to weight gain – it's a vicious cycle. Start by cutting belly fat, which is good for your heart health and general wellness.
Eat a healthy diet: For full-body health and hormone balance, consider the Mediterranean diet, which focuses on lean proteins, healthy fats, and plant-based foods. Also enjoy foods that are high in vitamin D, which supports testosterone production, strong bones, and mood. Some of these include eggs, salmon, and mushrooms.
Rethink online viewing habits: Pornography is readily available online and it can perpetuate skewed expectations for what masculinity and sexual virility should feel like. Your body was not designed to perform sexually for hours on end – what you're seeing on the screen is cinematography, not reality.
Medication options
Clomiphene citrate pills: The drug Clomid blocks the estrogen receptors in the brain that exert negative feedback on testosterone production. The result is an increase in LH and FSH production. My mentor gave a great analogy for this process: it's like putting an ice pack on a thermostat to trick it into cranking out more heat. Clomid encourages the brain to make more LH and FSH, and therefore more natural testosterone.
Injections: The hormone hCG can substitute for LH. Patients typically manage these short- or long-term therapies at home. We typically prefer not to prescribe testosterone injections for men in their late 20s and 30s, as these treatments are more likely to cause infertility. Exceptions would be individuals with a genetic problem that interferes with fertility or men who do not want to conceive.
Aromatase inhibitor: The drug Anastrozole, more commonly used in breast cancer treatment, blocks the conversion of testosterone to estrogen. We typically prescribe this medication when there is a pertinent need because it can drive the estrogen level too low, resulting in fragile bones.
Gels: This option offers varying degrees of absorption, and you must allow the gel to dry before getting dressed. Women should not make contact with the gel to avoid increasing their own testosterone level, which can cause side effects such as unwanted body hair.
Nasal spray: A new drug, Natesto, allows for at-home dosing of testosterone with less risk of fertility loss. The testosterone absorbs through the lining of the nose, and application takes about 10 seconds per dose, three times a day.
Patches: Similar to a nicotine patch, this option trickles testosterone into the system through the skin. Patches last about 24 hours and must be placed on an area of the body that is free of hair, oil, or irritation. It also can’t go over a bone or joint that will be disturbed by sitting, sleeping, or moving. You should not "reuse" a spot for at least seven days. Patches are not approved for age-related hypogonadism.
Testosterone pellets: We can implant testosterone pellets into the fatty tissue above the buttock area. The pellets hold crystalized testosterone, which releases into the body over four to six months.
Recovering fertility after testosterone therapy
Regaining fertility is not guaranteed, but it is possible for some patients depending on their age and duration of testosterone use. The first step is to end any testosterone therapy and get baseline lab tests to know where your levels truly are. Often, we find that the patient's LH production has been suppressed. In those cases, the next task is to increase it. Most patients start with Clomid. If that isn't sufficient, we may consider hCG injections.
In rare cases when neither therapy works, we can consider increasing the FSH level as well with injections of the FSH substitute hMG. This drug is more expensive, and success is not guaranteed.
If you are concerned about low-T symptoms, or if you've tried therapies that didn't work, talk with your primary care doctor or urologist. Feeling better starts with a conversation about your needs, goals, and lifestyle. Personalized care from a board-certified urologist or male fertility expert is the healthiest way to get there.
Urology | Can testosterone increase the risk of prostate cancer? | yes_statement | "testosterone" can "increase" the "risk" of "prostate" "cancer".. higher levels of "testosterone" can lead to an "increased" "risk" of "prostate" "cancer". | https://www.hopkinsmedicine.org/news/articles/2016/01/boosting-testosterone-not-shown-to-raise-prostate-cancer-risk | Boosting Testosterone Not Shown to Raise Prostate Cancer Risk ... | Boosting Testosterone Not Shown to Raise Prostate Cancer Risk
Does testosterone therapy raise your risk of getting prostate cancer or having a heart attack? For definitive answers, large-scale, long-term controlled studies are needed, says Arthur L. Burnett, M.D., the Patrick C. Walsh Distinguished Professor of Urology. However, in the meantime, results of a meta-analysis study led by Burnett suggest that, for prostate cancer at least, the risk is not changed by taking extra testosterone.
Testosterone therapy — boosting low testosterone with supplemental medication — is often prescribed for men with low blood levels of testosterone, for symptoms including reduced libido and sexual activity, fewer spontaneous erections, decreased energy and depressed mood. "But controversies surround the role of testosterone therapy, particularly with respect to prostate cancer and cardiovascular health risks, and these concerns have heightened recently," says Burnett.
In an effort to address the prostate cancer side of these worries, Burnett collaborated with Peter Boyle and colleagues of the International Prevention Research Institute in Lyon, France. They pored over data from about 20,000 men who participated in 24 population-based studies that evaluated the association between blood testosterone levels and the risk of prostate cancer. "We found that the risk of prostate cancer was neither increased or decreased among men with high levels of testosterone compared to lower levels," says Burnett. "Also, testosterone therapy was not found to increase prostate specific antigen (PSA) levels, or to promote the occurrence of prostate cancer." The meta-analysis was presented as a prize abstract selection at the Press Program of the American Urological Association 2015 Annual Meeting. Burnett hopes that these findings will be helpful to clinicians and patients who are worried that boosting low testosterone will cause prostate cancer to develop.
"Testosterone therapy was not found to increase PSA levels, or to promote the occurrence of prostate cancer."
Urology | Can testosterone increase the risk of prostate cancer? | no_statement | "testosterone" does not "increase" the "risk" of "prostate" "cancer".. there is no evidence to suggest that "testosterone" "increases" the "risk" of "prostate" "cancer". | https://pubmed.ncbi.nlm.nih.gov/24980615/ | Incidence of prostate cancer in hypogonadal men receiving ... | Abstract
Purpose:
Although there is no evidence that testosterone therapy increases the risk of prostate cancer, there is a paucity of long-term data. We determined whether the incidence of prostate cancer is increased in hypogonadal men receiving long-term testosterone therapy.
Materials and methods:
In 3 parallel, prospective, ongoing, cumulative registry studies 1,023 hypogonadal men received testosterone therapy. Two study cohorts were treated by urologists (since 2004) and 1 was treated at an academic andrology center (since 1996). Patients were treated when total testosterone was 12.1 nmol/l or less (350 ng/dl) and symptoms of hypogonadism were present. Maximum followup was 17 years (1996 to 2013) and median followup was 5 years. Mean baseline patient age in the urological settings was 58 years and in the andrology setting it was 41 years. Patients received testosterone undecanoate injections in 12-week intervals. Pretreatment examination of the prostate and monitoring during treatment were performed. Prostate biopsies were performed according to EAU guidelines.
Results:
Numbers of positive and negative biopsies were assessed. The incidence of prostate cancer and post-prostatectomy outcomes was studied. A total of 11 patients were diagnosed with prostate cancer in the 2 urology settings at proportions of 2.3% and 1.5%, respectively. The incidence per 10,000 patient-years was 54.4 and 30.7, respectively. No prostate cancer was reported by the andrology center. Limitations are inherent in the registry design without a control group.
Conclusions:
Testosterone therapy in hypogonadal men does not increase the risk of prostate cancer. If guidelines for testosterone therapy are properly applied, testosterone treatment is safe in hypogonadal men.
Creationism | Can the 'big bang' theory and creationism coexist? | yes_statement | the '"big" "bang"' "theory" and "creationism" can "coexist".. it is possible for the '"big" "bang"' "theory" and "creationism" to "coexist". | https://www.salon.com/2014/10/28/pope_francis_believes_in_evolution_and_big_bang_theory_god_is_not_a_magician_with_a_magic_wand/ | "God is not a magician, with a magic wand": Pope Francis schools ... | The pontiff admits he believes in evolution and the Big Bang, says science and religion can peacefully coexist
Published October 28, 2014 4:50PM (EDT)
In an exciting declaration, Pope Francis I stated that God should not be seen as a "magician with a magic wand," while unveiling a statue of his predecessor Pope Benedict XVI at the Pontifical Academy of Sciences. Pope Francis also stated that evolution and the Big Bang theory are both true and not incompatible with the church's views on the origins of the universe and life.
"When we read about Creation in Genesis, we run the risk of imagining God was a magician, with a magic wand able to do everything. But that is not so," Francis said, according to the Independent. Francis continued by stating that God "created human beings and let them develop according to the internal laws that he gave to each one so they would reach their fulfillment."
"The Big Bang, which today we hold to be the origin of the world, does not contradict the intervention of the divine creator but, rather, requires it," Francis explained. "Evolution in nature is not inconsistent with the notion of creation, because evolution requires the creation of beings that evolve."
While the pope's understanding of the origins of life still requires a divine force (rather than a scientific one), his views are a leap forward for the Catholic Church. Pope Francis is not the first pope to welcome these two scientific theories. However, the Catholic Church has a long reputation of being at odds with science, and Pope Francis' declaration is looked at as “trying to reduce the emotion of dispute or presumed disputes” between the church and science.
Creationism | Can the 'big bang' theory and creationism coexist? | yes_statement | the '"big" "bang"' "theory" and "creationism" can "coexist".. it is possible for the '"big" "bang"' "theory" and "creationism" to "coexist". | https://scholarblogs.emory.edu/millsonph115/2014/09/22/evolution-and-god-can-coexist/ | Evolution and God Can Coexist | PH115: Introduction to Ethics | Evolution and God Can Coexist
In Christopher Bennett’s What is this thing called ethics?, Bennett discusses the positions of theists, atheists, and humanists. The concept of God and morality coinciding is a difficult process to grasp because there is no tangible proof of the existence of God. Although there are aspects of each position that I agree with, I support the theist position in regards to evolution and how the world came to be. Christopher Bennett, when speaking on behalf of the theists, made the claim that “we need to explain the very existence of the universe through there being a perfectly free and powerful being” (Bennett 116).
Charles Darwin’s theory of evolution explains how organisms evolved through natural selection and the survival of the fittest. Darwin never mentioned God in his theory, nor did he explain why the process of evolution originally occurred. To understand the world in which we live, we need to “point to a Being powerful enough to start the process of the universe’s development off” (Bennett 116). Science traces the universe back to the big bang, but what happened before it? Bennett implies that there must be a figure behind the world’s creation.
Darwin’s theory is scientifically proven as true. Just because his theory is true does not mean the existence of God is false. Author Stefan Lovgren argues that evolution and religion can coexist. He argues that evolution could be God’s tool in the creation of humans. Lovgren states, “it would be perfectly logical to think that a divine being used evolution as a method to create the world” (Lovgren). In other words, it makes sense that God would use evolution as a method because the ones that are most adapted to the environment survive. Evolution could be used to explain present life, but God could be the ultimate creator who used evolution as a tool (Snellenberger). If we are all God’s children, wouldn’t God want us all to be well adapted so we can survive and prosper?
Bennett also discusses the fact that God is the Designer. He states, “where there is a design, there must be a designer” (Bennett 115). To support this claim, he compares a watch and a chameleon. If a person were to find a watch on a deserted island, that individual would know that someone else created the intricate clockwork and the design of the watch. A chameleon, on the other hand, does not have a known designer. Conscious design was put behind the chameleon’s ability to change color to fit its surroundings. But who gave the chameleon this ability? (Bennett 115). Science cannot answer this question; religion can. God Almighty could be the mastermind behind this design.
Belief in God has no tangible proof the way science does, which is why many people find it hard to accept that there is a God. The existence of God is not proved by facts, but rather by beliefs and faith. Through the theist’s argument, it is clear that one can support scientific theories while also having faith in God.
Works Cited
Bennett, Christopher. “Ethics and Religion.” What is this thing called ethics?. London: Routledge, 2010. 111-125. Print.
Creationism | Can the 'big bang' theory and creationism coexist? | yes_statement | the '"big" "bang"' "theory" and "creationism" can "coexist".. it is possible for the '"big" "bang"' "theory" and "creationism" to "coexist". | https://www.prweb.com/releases/new_book_proves_god_caused_the_big_bang_and_the_creation_of_the_universe/prweb16143852.htm | New Book Proves God Caused the Big Bang and the Creation of the ... | New Book Proves God Caused the Big Bang and the Creation of the Universe
News provided by
LAVIDGE
05 Mar, 2019, 08:00 ET
SAN ANTONIO (PRWEB) March 05, 2019 -- Oftentimes people say the scripture of Christianity cannot coexist with scientific theory. Author Clare Raynard Magoon, Jr. debunks that notion by releasing his new book, “Creation and the Big Bang: How God Created Matter from Nothing,” a perfect blend of a scientific take on the Big Bang theory and a creationist point of view.
In the book, Magoon demonstrates through his research and through Scripture (especially Genesis 1:1) that God himself created the heavens and the earth and that this falls perfectly in line with the Big Bang theory and is the only explanation for how we got the universe from nothing.
“Creation and the Big Bang” also looks at new scientific discoveries and studies of the founding scientists who studied our origins, clearly demonstrating how the science giants (Bacon, Newton, Planck, Einstein etc) were all believers and sought after a creator behind the mystery of the cosmos.
“I’ve always been interested in science and theology and have always felt called to present my point of view and experiences to others,” Magoon said. “I want to reach out to friends, family and strangers who are looking for deeper explanations than those commonly presented by teachers and religious leaders.”
An Amazon reviewer praises “Creation and the Big Bang”: “This book successfully tied the Big Bang and Creation together, showing the harmony between the Biblical account of creation and the scientific account. Many new scientific terms were taught in layman's terms, simplifying a complex process. Believing and defending creation is now an easy task.”
Readers will be intrigued by the book's central message and by how Magoon synthesizes the widely believed Big Bang theory with the Scripture account of God creating the world.
“Creation and the Big Bang: How God Created Matter from Nothing” By Clare Raynard Magoon, Jr. ISBN: 9781973631316 (softcover); 9781973631323 (hardcover); 9781973631330 (electronic) Available at the WestBow Press Online Bookstore and Amazon
About the author Clare Raynard Magoon Jr. was born in Rockford and raised in Cedar Springs, Michigan. After attending the Moody Bible Institute for one semester, he joined the US Army, making it his career. Magoon was on active duty as a soldier for six years and then served as a civilian, working as an engineer and executive for an additional twenty-nine years. Traveling the world, he was able to observe a variety of cultures and religious practices throughout Europe, Asia, and North Africa, and Creation and the Big Bang draws on these experiences. To learn more, please visit https://www.claremagoon.com.
News provided by
LAVIDGE
05 Mar, 2019, 08:00 ET
SAN ANTONIO (PRWEB) March 05, 2019 -- Oftentimes people say the scripture of Christianity cannot coexist with scientific theory. Author Clare Raynard Magoon, Jr. debunks that notion by releasing his new book, “Creation and the Big Bang: How God Created Matter from Nothing,” a perfect blend of a scientific take on the Big Bang theory and a creationist point of view.
In the book, Magoon demonstrates through his research and through Scripture (especially Genesis 1:1) that God himself created the heavens and the earth and that this falls perfectly in line with the Big Bang theory and is the only explanation for how we got the universe from nothing.
| yes
Creationism | Can the 'big bang' theory and creationism coexist? | yes_statement | the '"big" "bang"' "theory" and "creationism" can "coexist".. it is possible for the '"big" "bang"' "theory" and "creationism" to "coexist". | https://science.howstuffworks.com/science-vs-myth/everyday-myths/god-science-co-exist.htm | Can God and science co-exist? | HowStuffWorks | Humans have debated the significance of God and science for centuries. To name just one example, they've battled over whether to teach creationism alongside or in place of evolution in U.S. public schools. People have taken sides; believers of science stand firmly on one side and followers of a higher power stay on the other. Yet, those on both sides might be surprised to learn that they can float between sides -- or switch teams entirely.
In his 1999 book, "Rocks of Ages," paleontologist Stephen Jay Gould argued that religion and science can co-exist because they occupy two separate spheres of the human experience. Gould uses a term he previously coined, non-overlapping magisteria (NOMA), which is the concept that both religion and science have the authority to teach their respective dogma [source: Gould].
According to Gould, science and God are inherently divided and thus can easily co-exist in the human belief system. Science, he argues, answers questions of fact, while religion covers questions of morality.
While Gould's argument is valid, its attempt at reconciling God and science was quickly rejected by both atheists and religious adherents. The zoologist and atheist thinker Richard Dawkins called NOMA "an empty idea" and pointed out that there are a number of areas where science and God compete for an individual's faith [source: Dawkins]. The debate over evolutionary theory is just one such flashpoint.
A 2009 study published in the Journal of Experimental Social Psychology suggests that humans can't reconcile two explanations as wildly different as creationism and evolution for their existence. The study found that, when exposed to descriptions of evolutionary theory that make clear that it's supported by science, participants were more susceptible to subliminal messages in support of the theory in a separate test later on. Conversely, those who had read that the theory "raised more questions than it answered" were less susceptible [source: Lloyd].
This study doesn't quite prove that science and religion are irreconcilable, though it adds to a body of work on the conflict thesis, a mid-19th century concept that holds that religion and science can't be reconciled.
Yet, several humans who subscribe to both faith in God and science show that the two can co-exist. Francis Collins, the founder of the Human Genome Project and a practicing Christian, is an excellent example. At a Pew Research forum, Collins pointed out several pieces of evidence of God's existence. He singled out concepts like the "unreasonable effectiveness of mathematics," an observation by physicist Eugene Wigner that math's most amazing quality is that it works so simply and elegantly [source: Pew Research].
Collins subscribes to the traditional tenets of evolutionary theory, beginning with the Big Bang, but has reconciled it with a belief in God. He believes that God created the Big Bang with the intent to create. Collins isn't alone; a poll taken in 1996 found that 40 percent of scientists say they believe in God [source: Bloom]. That was about the same percentage of Americans who said they believe in the theory of evolution in a 2009 Gallup poll. Twenty-five percent of Americans responded that they don't believe in evolution [source: Newport]. Perhaps it's the third group, the 36 percent of people who had no opinion either way, who represent the part of society where religion and science can co-mingle, or at least not be at odds.
Certainly, the existence of Francis Collins and people like him is evidence that God and science can co-exist, at least within the individual. Within society, that co-existence may be harder to find, especially as more individuals increasingly choose one or the other.
A 2009 study published in the Journal of Experimental Social Psychology suggests that humans can't reconcile two explanations as wildly different as creationism and evolution for their existence. The study found that, when exposed to descriptions of evolutionary theory that make clear that it's supported by science, participants were more susceptible to subliminal messages in support of the theory in a separate test later on. Conversely, those who had read that the theory "raised more questions than it answered" were less susceptible [source: Lloyd].
This study doesn't quite prove that science and religion are irreconcilable, though it adds to a body of work on the conflict thesis, a mid-19th century concept that holds that religion and science can't be reconciled.
Yet, several humans who subscribe to both faith in God and science show that the two can co-exist. Francis Collins, the founder of the Human Genome Project and a practicing Christian, is an excellent example. At a Pew Research forum, Collins pointed out several pieces of evidence of God's existence. He singled out concepts like the "unreasonable effectiveness of mathematics," an observation by physicist Eugene Wigner that math's most amazing quality is that it works so simply and elegantly [source: Pew Research].
Collins subscribes to the traditional tenets of evolutionary theory, beginning with the Big Bang, but has reconciled it with a belief in God. He believes that God created the Big Bang with the intent to create. Collins isn't alone; a poll taken in 1996 found that 40 percent of scientists say they believe in God [source: Bloom]. That was about the same percentage of Americans who said they believe in the theory of evolution in a 2009 Gallup poll. Twenty-five percent of Americans responded that they don't believe in evolution [source: Newport]. | yes |
Creationism | Can the 'big bang' theory and creationism coexist? | no_statement | the '"big" "bang"' "theory" and "creationism" cannot "coexist".. it is not possible for the '"big" "bang"' "theory" and "creationism" to "coexist". | http://www.greatfallstribune.com/story/life/2015/03/01/intelligent-design-evolution-can-share-classroom/24240423/ | Intelligent design, evolution can share a classroom | Intelligent design, evolution can share a classroom
Clayton Fiscus, a new Republican member of the Montana House of Representatives, claimed that there doesn't have to be one. He has put forth a bill that would force public schools to teach intelligent design, or the belief that a supreme being created the universe, along with the traditional evolutionary theory.
No one was present at the conception of the universe, no matter what kind of birth it was. Therefore, we cannot discredit any theories about the beginning of the universe.
As a Christian, it is very important to me to be able to learn about how the Bible fits in with scientific theory. I was fortunate enough to attend Foothills, a school that taught me more about this and encouraged me to carry out my own research. Through this endeavor, I was able to discover that one doesn't have to leave science in the dust to be a believer.
In fact, when you examine Genesis and compare it to some of the scientific theories in place, especially the Big Bang Theory, it fits very well. The Bible says that God created the world from nothing, and the Big Bang theory says that all matter was created from a single point.
However, the Big Bang theory has some gaps. There is no explanation for where the matter that condensed into a single point came from initially, and the gravity involved defies the laws of physics. The idea of an intelligent God who created our universe fills in these gaps very well.
That's only one example, but intelligent design does mesh very well with science in general, even non-Christian intelligent design. All religions claim that a supreme being created the earth, and teaching intelligent design would encompass all of these beliefs.
While some may argue that Sunday school is the appropriate place to learn about intelligent design and creationism, they're not taking into account the people who either do not have access to or don't feel as if they can attend Sunday school. For example, atheist or intolerant parents may never give their children the chance to learn about God, or the children in question may never have considered religion before.
Teaching intelligent design in schools may help these children to understand that there is more than one theory about the beginning of the universe, and the Big Bang theory is just that — a theory — as intelligent design is.
What the issue really comes down to is whether teaching intelligent design is constitutional.
As civil liberties expert Tom Head comments, "it depends on how you interpret the First Amendment's establishment clause."
If you choose to interpret it as saying that church and state must always be completely separated, then it would only make sense to leave intelligent design out of a school curriculum. However, if you choose to say that it means that nonpreferential religious doctrine can be taught alongside purely scientific theory, then it is completely logical to teach intelligent design.
It's vital that we encourage different theories about creation, as well as different ways of looking at the world. It's time to live up to those Coexist bumper stickers, America.
Quincy Balius is a freshman at Cascade High School and a member of the Tribune's Teen Panel.
| yes |
Paleo Diet | Can the Paleo diet cause thyroid problems? | yes_statement | the "paleo" "diet" can "cause" "thyroid" "problems".. following the "paleo" "diet" can lead to "thyroid" "problems". | https://primehealthdenver.com/how-to-lose-weight-with-hypothyroidism/ | 6 Steps to Lose Weight with Hypothyroidism | 2. Eat Anti-Inflammatory Foods
We’ve made a list of foods to eat and avoid, but the bottom line is that foods that feed inflammation need to be removed from your meals. Clearing up inflammation through diet can help your thyroid function properly.
At PrimeHealth, we recommend following the Autoimmune Paleo (AIP) diet to those diagnosed with hypothyroidism (especially when autoimmunity is a trigger) for between 1 and 6 months.
Eating this way has two main advantages:
The AIP diet is designed to eliminate inflammatory foods that can trigger the autoimmune root cause of hypothyroidism (Hashimoto’s thyroiditis).
3. Intermittent Fasting
To stabilize your weight (or lose more), you may want to try time-restricted eating (also called intermittent fasting, or IF).
Here are the basics:
Choose a specified time window in your day during which you’ll eat.
This window should end at least 2 hours before bedtime, which will protect you from blood sugar spikes while sleeping.
There should be at least 12 hours between the last bite of food one day and your first bite the next day.
Extending your fasting window to 14 or even 16 hours may be even more beneficial when trying to lose weight.
If you suffer from hypoglycemia or diabetes and cannot increase your fasting window safely, talk to your doctor before starting this kind of eating plan. Women also need to consider the impact of fasting on menstrual cycles, so talk to your healthcare provider if you notice a significant change.
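The basics above are just arithmetic, so a planned schedule can be sanity-checked in a few lines. The sketch below is illustrative only (the function name and the same-day simplification are mine, not part of the article):

```python
def check_eating_window(first_bite_h, last_bite_h, bedtime_h):
    """Check a planned same-day eating window against the basics above.

    All arguments are hours on a 24-hour clock (e.g. 8.5 means 8:30 AM).
    """
    eating_hours = last_bite_h - first_bite_h   # length of the eating window
    fasting_hours = 24 - eating_hours           # the rest of the day is the fast
    return {
        "fasting_hours": fasting_hours,
        # at least 12 hours between the last bite and the next day's first bite
        "fast_long_enough": fasting_hours >= 12,
        # the last bite should come at least 2 hours before bedtime
        "ends_before_bed": bedtime_h - last_bite_h >= 2,
    }

print(check_eating_window(8, 18, 22))
```

For example, eating between 8:00 AM and 6:00 PM with a 10:00 PM bedtime gives a 14-hour fast and satisfies both guidelines.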
4. Stress Relief
Stress is a leading contributor to an underactive thyroid. Relieving stress can lead to a normal balance in your hormone levels. If your thyroid is normalized, your metabolism will speed up again.
And outside of hypothyroidism, stress has been linked to obesity in general. This is primarily due to exposure to excessive amounts of cortisol over time.
Turn off technology before bed. Cutting out blue light exposure an hour before bedtime can improve your sleep quality. You can do this by using software that blocks blue light, like f.lux and iristech.
Use blue light-blocking glasses. We also encourage everyone to use blue light-blocking glasses after sunset if using any electronic devices, which helps to reduce cortisol production.
Meditate. Practicing meditation is another way to alleviate your stress and decrease excessive cortisol production.
Exercise is a fantastic natural treatment for hypothyroidism, helping to:
Relieve depression
Increase energy levels
Boost self-esteem
Reduce joint pain
Increase muscle mass
Improve insulin resistance
Lose unwanted weight
Maintain healthy weight
How much weight should you lose with hypothyroidism? Determine your healthy weight by starting with a normal BMI range for your height. Every person is different, so seek medical advice from your doctor about exactly how much weight you should try to lose.
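As a rough illustration of that starting point, the sketch below converts the standard "normal" BMI range (18.5 to 24.9) into a weight range for a given height. The function name is mine, and this is a ballpark calculation, not medical advice:

```python
def healthy_weight_range_kg(height_m, bmi_low=18.5, bmi_high=24.9):
    """Translate the normal BMI range into a weight range (kg) for a height (m).

    BMI = weight / height**2, so weight = BMI * height**2.
    """
    return bmi_low * height_m ** 2, bmi_high * height_m ** 2

low, high = healthy_weight_range_kg(1.75)
print(f"{low:.1f} kg to {high:.1f} kg")  # range for someone 1.75 m tall
```

For a 1.75 m person this prints roughly 56.7 kg to 76.3 kg; as the paragraph above says, your own target should still come from your doctor.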
6. Supplements
Here are the best natural dietary supplements to lessen the severity of your hypothyroidism, promote a healthy metabolism, and support hypothyroidism-related weight loss:
Iodine is essential to healthy thyroid function. Iodine deficiency is the leading cause of goiters, a thyroid disorder. Iodine is important in preventing autoimmune diseases, such as Hashimoto’s, that lead to hypothyroidism. Recent research reveals iodine’s weight loss potential in thyroid patients.
Selenium is found in fish and muscle meats. As a supplement, researchers found that selenium improved biomarkers of hypothyroidism. Selenium also improves mood, which can relieve stress. Both stress relief and improved thyroid function speed up your metabolism.
Glutathione is the most abundant antioxidant in the human body. In supplement form, glutathione fights oxidative stress that leads to inflammation. 2018 research reveals that glutathione promotes weight loss in certain obese individuals.
Metabolism is how fast your cells turn nutrients into energy. The slower the process, the fewer calories you burn at rest and during activity. With a slow metabolism, more stored calories turn to fat tissue.
Weight gain can be a vicious cycle. Once you have put on weight due to slow metabolism, it can be more difficult to:
Exercise the proper amount (2 ½ hours per week for most people)
Get motivated to work out or adhere to a diet
Move around as much as you used to during the day (also called “accidental exercise”)
Fatigue
Hypothyroidism leads to fatigue. Fatigue is one of the primary symptoms of an underactive thyroid, as your cells need thyroid hormones to make energy.
Fatigue leads to less physical activity, which can lead to weight gain.
Although over-the-counter medications like ibuprofen and acetaminophen can relieve pain, they come with all sorts of side effects. Fortunately, there are many natural ways to reduce hypothyroidism-related joint and muscle swelling so you can get moving again.
Thyroid hormone replacement medication (like levothyroxine, though it can also cause side effects) or anti-inflammatory supplements (like curcumin) can treat the root causes of joint and muscle swelling.
When your joints and muscles feel good, physical activity and a healthier weight are easier to achieve.
Depression
Another symptom of hypothyroidism is depression. Depression causes a lack of motivation to do anything, including exercise. This lack of exercise can lead to weight gain.
Most interestingly, if hypothyroidism is misdiagnosed as a mood disorder, the lithium medication that’s often prescribed can make thyroid problems worse. Effective diagnosis should include measuring thyroid hormone levels, as well as thyroid-stimulating hormone (TSH) levels.
Isolation fuels depression, as does staying indoors. If you find yourself in a depressive state, try spending time with loved ones and enjoying the outdoors.
In Summary
It’s easy to feel overwhelmed when attempting to lose weight with hypothyroidism.
Not only does hypothyroidism directly cause weight gain, but its other symptoms also contribute to body weight outside the normal range.
However, there’s hope with these natural tips to change your eating habits, relieve stress, exercise regularly, and use natural supplements.
Here at PrimeHealth, we have years of experience with hypothyroidism and weight loss. We focus on each patient as an individual and take the time you need to lose your unwanted weight from hypothyroidism.
At PrimeHealth in Denver, Colorado, we focus on disease prevention, from group medical visits to personalized healthcare. We’d love to offer you a free consultation; schedule your conversation today!
Share this Post
Dr. Soyona Rafatjah is a board-certified Family Medicine physician and Co-Founder and Medical Director of PrimeHealth. Dr. Rafatjah is passionate about helping her patients reach their health goals. She helps them address the root cause of their issues and focus on disease prevention in addition to treatment. With the knowledge and tools she gives them, her patients feel in control of their own health.
| no |
Paleo Diet | Can the Paleo diet cause thyroid problems? | yes_statement | the "paleo" "diet" can "cause" "thyroid" "problems".. following the "paleo" "diet" can lead to "thyroid" "problems". | https://drbeckycampbell.com/hashimoto-diet/ | The Best Hashimoto Diet – Dr Becky Campbell | The Best Hashimoto Diet
Diet plays an integral role in every aspect of health, so it only makes sense that there is a specific diet for Hashimoto’s thyroiditis. Specific dietary strategies can greatly reduce symptoms, and even possibly eliminate the need for medications. Food is just one of the things we can take control of. It’s up to us to decide what we feed our body. If you suffer from Hashimoto’s, dietary choices are even more important. I am going to share a Hashimoto diet plan with you to help get you on the right track with your thyroid health.
The Importance of Diet
Diet is important not only when you suffer from an autoimmune disease but in every other area of your health. When you make healthier dietary choices, you are less likely to suffer from certain health conditions. When you eliminate your known trigger foods, autoimmune conditions have the potential to go into remission.
With Hashimoto’s disease, eliminating your food triggers is such an important piece to healing from this condition. The last thing you want is to add more inflammation to your already inflamed thyroid.
Let’s take a look at the Hashimoto’s diet I recommend as well as some of the dietary changes I recommend all people with Hashimoto’s make.
Diet for Hashimoto’s Thyroiditis
While no one diet fits all, the diet I recommend my clients follow when dealing with Hashimoto’s disease is a Paleo-style diet. This dietary approach removes many of the commonly consumed inflammatory foods, which allows the overactive immune system time to settle and the thyroid to heal.
A Paleo style diet works so well because it removes many of the foods that don’t mix well with this autoimmune thyroid condition. Let’s look at these foods more closely:
Gluten
Gluten is something that absolutely should be removed from the diet when dealing with Hashimoto’s disease. Gliadin, a protein found in gluten, happens to resemble thyroid tissue once ingested. This is where problems come in for those who suffer from thyroid disease. When this protein passes through the gut lining and into the bloodstream, the body attacks it, and the immune system ends up on a path of destruction not only for gliadin but for the thyroid as well.
Another problem with gluten is that the immune system can respond to gluten for up to 6 whole months after consumption! This is why it’s so important to completely eliminate gluten, and not just reduce it. With a Paleo diet, gluten is completely out of the question.
Dairy
Another food item eliminated from this Hashimoto’s diet approach is dairy. One of the many issues with dairy is that cow’s milk contains proteins different from those found in human milk, which can be a huge issue for anyone who suffers from digestive problems, as a large majority of those with autoimmune disease do. An immune response can be triggered by these foreign proteins and cause chaos in the body. When dealing with Hashimoto’s, it’s a good idea to eliminate dairy. Eliminating dairy can help heal the gut and the immune system at the same time.
What to Eat Instead of Dairy?
Almond milk, cashew milk, hazelnut milk, and hemp milk are fantastic dairy-free milk options. Make sure to buy organic, unsweetened varieties without additives or make your own. For healthy fats, try avocados, nut cheeses, or nut butter instead of cheese. If you are looking for a cheesy flavor without cheese and dairy, sprinkle a bit of nutritional yeast on your meals and salads.
Sugar
A Paleo style diet urges you to eliminate all processed and artificial sugars. This is great news for those with Hashimoto’s as sugar does not work well when trying to heal from this condition.
One of the many reasons sugar should be out of the question is the need to keep your blood sugar balanced. Anyone who is looking to reduce or even eliminate Hashimoto’s symptoms will need to work hard to balance their blood sugar. The problem with blood sugar imbalances is that many people consume large amounts of carbohydrates to feel better, and eating carbs for a quick energy boost can cause blood sugar to spike too suddenly. It’s a vicious cycle you do not want to be a part of.
What to Eat Instead of Sugar?
When you first go sugar-free, it will be emotionally difficult for the first couple of weeks. The good news is that your body will adjust to this new way of eating. You will feel more energetic and healthy, and you won’t be missing sugar at all. If you want some sweetness in your life, low-glycemic index fruits, such as strawberries, raspberries, and blueberries, and sweet vegetables, such as beets, sweet potatoes, and carrots will satisfy your sweet tooth. For sweeteners, you can use a bit of monk fruit or stevia without disrupting your blood sugar levels.
The Paleo Diet Can Help
The Paleo Diet is a fantastic approach if you are dealing with Hashimoto’s or other thyroid conditions. The Paleo approach urges you to remove sugar, gluten, and dairy from your diet, and choose a more natural and healthier way of eating.
By following a Hashimoto’s diet based on Paleo principles, you will be consuming fewer calories, more protein, and more healthy fats to keep your blood sugar levels steady throughout the day. You will learn how to manage your blood sugar levels by using food as fuel. It seems so simple, but you can feel better just by trying a few simple dietary modifications.
Final Thoughts
Health starts with the foods we choose to eat. The sooner you balance your diet, the faster you can balance your health and get well. Addressing your dietary choices is an important part of overcoming Hashimoto’s thyroiditis. Working with a functional medicine practitioner, like myself, who has extensive experience in thyroid health, can guide your journey and help you with a personalized Hashimoto’s diet using Paleo principles.
If you are dealing with symptoms of Hashimoto’s disease or other thyroid issues, I invite you to schedule a consultation with me. I can help to identify the root cause of your condition and recommend a personalized treatment plan to repair your body and regain your health and well-being. Schedule your consultation here.
DR. BECKY CAMPBELL
Hi, I am Dr. Becky Campbell. I work with men and women who’ve had a health setback and are willing to do whatever it takes to reach optimal health so they can perform their best in their careers and be fully present with their family again.
Content on this website is not considered medical advice. Please see a physician before making any medical or lifestyle changes. Naturopathic doctors are not licensed to practice in the State of Florida. Doctors of Natural Medicine are not the same as Naturopathic Doctors.
What to Eat Instead of Sugar?
When you first go sugar-free, it will be emotionally difficult for the first couple of weeks. The good news is that your body will adjust to this new way of eating. You will feel more energetic and healthy, and you won’t be missing sugar at all. If you want some sweetness in your life, low-glycemic index fruits, such as strawberries, raspberries, and blueberries, and sweet vegetables, such as beets, sweet potatoes, and carrots will satisfy your sweet tooth. For sweeteners, you can use a bit of monk fruit or stevia without disrupting your blood sugar levels.
The Paleo Diet Can Help
The Paleo Diet is a fantastic approach if you are dealing with Hashimoto’s or other thyroid conditions. The Paleo approach urges you to remove sugar, gluten, and dairy from your diet, and choose a more natural and healthier way of eating.
By following a Hashimoto’s diet based on Paleo principles, you will be consuming fewer calories, more protein, and more healthy fats to keep your blood sugar levels steady throughout the day. You will learn how to manage your blood sugar levels by using food as fuel. It seems so simple, but you can feel better just by making a few dietary modifications.
Final Thoughts
Health starts with the foods we choose to eat. The sooner you balance your diet, the faster you can balance your health and get well. Addressing your dietary choices is an important part of overcoming Hashimoto’s thyroiditis. Working with a functional medicine practitioner, like myself, who has extensive experience in thyroid health, can guide your journey and help you with a personalized Hashimoto’s diet using Paleo principles.
If you are dealing with symptoms of Hashimoto’s disease or other thyroid issues, I invite you to schedule a consultation with me. I can help to identify the root cause of your condition and recommend a personalized treatment plan to repair your body and regain your health and well-being.
The Root Cause of Thyroid Disorders, with Izabella Wentz | RHR
Hashimoto’s is the most common cause of thyroid disorders in the nation—and it can cause disruptive symptoms well before a conventional doctor would make a diagnosis or offer treatment. And that’s where the functional approach differs. In this episode of Revolution Health Radio, I talk with renowned thyroid specialist Izabella Wentz about how Functional Medicine can uncover the root cause of a thyroid disorder early on and help people feel better faster.
I’ve known Izabella for some time. I really respect her approach. I find it to be evidence-based and balanced. And we see eye to eye on a lot of topics related to thyroid and autoimmunity. I’m really excited to talk to her about her most recent book, which is Hashimoto’s Food Pharmacology. It looks at a food-based approach to addressing autoimmune dysfunction, specifically with Hashimoto’s. So I hope you enjoy the interview as much as I did. Let’s dive in.
Chris Kresser: Izabella, thanks so much for joining us. It’s a pleasure to have you on the show.
Izabella Wentz: Thank you so much for having me, Chris. I’ve been a long-time listener. So excited to be on.
Chris Kresser: Great. So let’s start with a little bit about your background and your story. I know a bit about it myself, but some of my listeners might not be familiar with it. So, how did you end up doing this work? How did you come to this?
Izabella’s Experience with Hashimoto’s
Izabella Wentz: So, I became the Thyroid Pharmacist as a result of my own health journey. Full disclosure: I was never interested in the thyroid gland during pharmacy school. I thought it was a very boring condition where you just gave somebody thyroid hormone if they had an underactive thyroid, and you suppressed their thyroid hormone production if they had an overactive thyroid. And little did I know, a lot of the symptoms I was actually having in pharmacy school were related to my thyroid condition.
It wasn’t until a few years after graduation, as these symptoms just kept building up, and every year I had more and more symptoms, that I pursued some further testing and found out I had Hashimoto’s. I wanted to become the healthiest person I could be with Hashimoto’s and perhaps find some ways to slow down the progression of the condition and improve some of its symptoms. And that’s sort of how I became a Hashimoto’s expert and human guinea pig: really trying to get myself better with some of these lifestyle interventions that were just not the standard of care at the time when I was diagnosed.
And as a result of my own health journey and getting my own health back, I’ve been able to work with other people who had very similar symptoms. It’s amazing how a lot of the things that helped me ended up helping them and just kind of deepened my knowledge from that point on. So I’ve been doing this work since, really, started about 10 years ago with my own diagnosis.
Chris Kresser: Well, let’s talk a little bit more about that diagnosis, because I’m always curious about people’s interaction with the conventional system. And it’s a little different because you were a practitioner yourself. But did you see a physician? And what did they test for initially? Did they diagnose you as Hashimoto’s? Or did you figure that out yourself through your own reading?
Because, as you well know, a lot of people go into the doctor and the doctor will just run a TSH test and that’s it. And if it’s “normal,” and I’m doing air quotes here, let’s say it’s 4.25, which is considered to be normal in the conventional system, that’s the end of the story. But of course you can’t diagnose Hashimoto’s with just a TSH test, which is what most people get. So how did that happen for you?
Izabella Wentz: Wow, yeah, I mean, for the average person, I think it takes about 10 years to be diagnosed. And I would say for me it was quite similar. I started having symptoms probably in childhood, because I was exposed to Chernobyl. My mom, actually, was a pediatrician, and she tested my thyroid function when I was a teenager because she thought my thyroid gland looked swollen.
Chris Kresser: Right.
Izabella Wentz: And at that point, my TSH was normal, so she took me to an endocrinologist, a pediatric endocrinologist of some sort. Then I started having symptoms again in my first year of undergrad, and I was depressed and just had all these weird things going on that were not like me. And I was a good girl, a pre-healthcare student, so I was always going to the clinic and doing all the things that you’re supposed to be doing.
And I would just always come back and they’d say, “No, everything was normal. Everything’s normal.” I was exhausted. At one point I was found to have Epstein-Barr virus when I was, I think, a sophomore in college. They said, “Oh you’re recovering from that. That’s why you’ve been tired the last year.” I was like, “Oh, great. That would’ve been good to know last year.”
Chris Kresser: Which is interesting because that’s one potential trigger of Hashimoto’s and autoimmunity.
Izabella Wentz: It’s a trigger or exacerbating factor for so many people that we just really don’t appreciate it. I think we do, you and I do.
Chris Kresser: It’s not widely known, that’s right. So you’re in college, you’re going back to the clinic, but still at this point they’re just looking at TSH or maybe T4 and T3. And has anyone tested your antibodies yet at this stage?
Izabella Wentz: No, definitely not. And then, every year after undergrad, it was something new. First I developed irritable bowel syndrome, and then they said, “Oh, it’s because pharmacy school is so stressful,” which it was, of course.
Chris Kresser: Right.
Izabella Wentz: And then it was acid reflux and then it was allergies and then it was hair loss and then brain fog. And I was going back and everything was normal. “No, you’re not anemic, your thyroid is fine.” At one point I got a hold of my thyroid test results, and it was like your TSH is 4.5, everything is fine.
Chris Kresser: Normal.
Izabella Wentz: Everything’s normal. And I was, like, in my mid-20s, and now I know, of course, that I was like a sloth with a TSH of that number; generally we want to have that around one for most people in their 20s. I just kept going, because I had all these issues, and I ended up going to an allergist because I was allergic to everything. And she was the one that found I had these high thyroid antibodies. They were TPO antibodies, and they were over 2,000 at that point.
Chris Kresser: Wow, yeah. Yeah.
Izabella Wentz: Yeah, it was a long journey.
Chris Kresser: Right. And had somebody tested those seven or eight years before when these symptoms started, they might have been mildly elevated and there might’ve been an opportunity to intervene there and slow or stop or reverse the progression of that. Which is, of course, what we talk about a lot in Functional Medicine, right? This idea of catching a pathology at an earlier stage before it even manifests in a disease. But your experience is a great example of where that can fall down.
Izabella Wentz: Absolutely, because at the point where I was diagnosed, I was already in need of thyroid hormone. Had I been diagnosed five or 10 years earlier, potentially I could’ve prevented the depression, the fatigue, the carpal tunnel, all these symptoms. And I could’ve maybe prevented the use of medications as well. But we know it’s a lot easier to prevent damage to an organ than to grow one back.
Chris Kresser: Absolutely, absolutely. I feel like your experience, and Hashimoto’s in general, is such a good case example of the need for the Functional Medicine model. Because, as you explained, the appearance of antibodies, and the body acting on that antibody production and attacking the gland, usually precedes the development of actual hypothyroidism or clinical signs and symptoms by years, if not decades. And so, if you’re just waiting to see the high TSH and the low thyroid hormones, you’re missing years or, again, decades where you could be intervening and stopping the progression of that condition. And yet, antibody testing is still not part of the standard of care today.
Izabella Wentz: And it’s so backwards, because the antibodies come first before you see change in the thyroid hormone and TSH.
Chris Kresser: Yeah.
Izabella Wentz: And it’s so unfortunate, because on any given day you test the TSH, you might or might not catch the thyroid disease. Because you’re still fluctuating between hypo and hyper in the early stages of Hashimoto’s, with the destruction of the thyroid gland, you’re going to have some thyroid hormone dumped into your system whenever the thyroid gland is being attacked. And you might test with a normal TSH on some days, with an elevated TSH on some days, and with a low TSH on some other days. So it’s kind of a luck of the draw unless you’re in the really far advanced hypothyroidism.
Chris Kresser: Stages of it. Yeah. I’m constantly reminding my patients and readers about this. That TSH, I remember a couple studies suggesting that you have to test it about 15 to 20 times to get a true average because it’s so variable from day to day. And then there are also some studies, as I know you know, that suggest that TSH has a diurnal rhythm like cortisol. So it fluctuates even throughout a day. So if you test it at different times during the day, you’ll get different values.
And so it’s so hard, as you mentioned, for someone who’s trying to figure this out if you’re only relying on TSH, because of that relapsing–remitting nature of early-stage Hashimoto’s. I see it all the time in my patients, where we’re regularly testing the full thyroid panel. And they can bounce back and forth between hypothyroidism, normal thyroid function, and hyperthyroidism, even in that early stage when they’re in a flare. And it’s really quite impossible to figure out if you’re only looking at TSH and you’re not testing free T4 and free T3 and the thyroid antibodies themselves.
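As a purely illustrative sketch of why a single TSH draw can mislead, the two sources of variability described here (a diurnal rhythm plus day-to-day fluctuation during flares) can be put into a toy model. The rhythm shape, amplitudes, and flare offsets below are made-up numbers for illustration, not physiological constants:

```python
import math

def tsh_reading(day_offset, hour, baseline=2.5, diurnal_amp=0.8):
    """Toy TSH model: a diurnal rhythm (peaking in the early morning)
    plus a day-to-day offset from autoimmune flares. All numbers are
    illustrative assumptions, not physiological constants."""
    diurnal = diurnal_amp * math.cos(2 * math.pi * (hour - 3) / 24)
    return baseline + diurnal + day_offset

# Hypothetical flare offsets across eight days of early Hashimoto's,
# sampled at a morning (8 a.m.) and an afternoon (4 p.m.) blood draw.
day_offsets = [-1.0, -0.4, 0.0, 0.3, 0.9, 1.4, -0.7, 0.2]
readings = [tsh_reading(d, h) for d in day_offsets for h in (8, 16)]

true_mean = sum(readings) / len(readings)
print(f"single draws span {min(readings):.2f} to {max(readings):.2f} mIU/L")
print(f"mean across all draws: {true_mean:.2f} mIU/L")
```

In this toy model, individual draws range from well under 1.0 to above 4.0 mIU/L even though the average sits near 2.3. That is the "luck of the draw" problem: any one result can look low, normal, or borderline high depending on the day and the hour.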
Izabella Wentz: Yeah, I’m a big proponent of testing for thyroid antibodies. So TPO antibodies and TG antibodies, if you have any suspicion of thyroid disease and if you’re a woman. I just had my first baby last year.
Chris Kresser: Oh, congratulations.
The Thyroid–Fertility Connection
Izabella Wentz: Thank you so much. And I know for so many women, they have miscarriages or they’re unable to get pregnant because of thyroid disease or even thyroid antibodies. So I would say every woman of childbearing age should get those tests done. Anybody with any suspicion of thyroid disease, anybody with mental health symptoms. Because what’s I guess crazy, no pun intended here, but a lot of times people present with anxiety and depression as some of their first symptoms. And those might be the only symptoms they have for many years.
Chris Kresser: Yeah, yeah. You may know some of my own backstory with this. It’s not me personally, but my wife. When we were trying to conceive many years ago, we had trouble. About a year passed, and that’s not necessarily unusual, but at some point, we started to think we probably need to look into this. It’s not happening as quickly and easily as we had hoped. And so, me being me, I did this pretty deep dive into the research and I did some testing. And we found in my wife’s case, actually, she had TSI antibodies.
So this was a suggestion of not Hashimoto’s, but Graves’ disease. And there’s some crossover sometimes. She also had some TG antibodies, and there’s always some question about whether it’s Graves’ or Hashimoto’s. And in a way, it doesn’t matter when you’re looking at it from a functional perspective. Because it’s autoimmunity. And so we were going to address it from that perspective. And for her we did some herbs and we did, she was already on a really good diet and lifestyle. And then low-dose naltrexone in her case was a huge shift.
And then shortly after that, she conceived and then was able to deliver a healthy baby. So it can be a big deal. I mean, it can really make the difference between being able to conceive and carry a baby through to full term and not even being able to conceive in the first place. And it’s just, it’s sad that so many women are suffering with this and don’t even know it.
Izabella Wentz: Yeah, it’s incredibly sad to see people having the multiple miscarriages, potentially not being able to get pregnant in the first place. And I love that you utilized an integrative approach where you were using some of the best, the integrative medicine with low-dose naltrexone and Functional Medicine and nutritional lifestyle medicine as well. And I’m a big believer in using everything we have that’s out there, whether that’s thyroid hormones, whether that’s some innovative compounded meds and nutrition and whatever else we can to just try to get the person to feel their best.
Chris Kresser: Absolutely. I mean, my motto has always been whatever works and causes the least harm, or preferably no harm at all. And oftentimes, medication does not fit into that category. But sometimes it does, and it can really be the best option, or at least one of many options, in that case. The other thing, of course (and it can happen in that situation), is that a woman can become pregnant and carry the baby to full term, but then the baby has thyroid issues because of the mother’s Hashimoto’s.
We know that it can increase the risk of hypothyroidism and thyroid issues in the baby. And that will often, for the same reasons that we’ve been talking about, not be diagnosed until much later in life. They might have a subclinical, low-level thyroid issue in childhood that could be responsible for a lot of their strange and mysterious health conditions.
Izabella Wentz: And another thing too, I feel like that’s super-underappreciated is a lot of times women will develop thyroid disease postpartum as well. And being a new mom, I’m sort of like, “Yeah, I could see how people would develop a thyroid condition from a Functional Medicine stress perspective, with the really long nights and being worried about your baby when he’s not sleeping and whatnot.” So that’s another really big common time that women will say, “Hey, I felt normal. I had a normal pregnancy. I had this beautiful child and here I am three, six, nine months later, and I just don’t feel like myself again. I’m exhausted. I’m anxious. I’m not losing the weight, or maybe I lost the weight very quickly.” And a lot of times it’s actually the thyroid that can, it can get out of whack right after a woman has a child.
Chris Kresser: Yeah. That seems to be an unfortunate consequence of pregnancy and delivering a baby. I can’t tell you how many patients I have whose onset of Hashimoto’s happened after the birth of a child. And the research suggests, and this makes sense, that there’s a profound immune shift that happens during pregnancy, particularly in the second and third trimesters. The immune system shifts in one direction in the first trimester, then it shifts back in another direction, and then after birth it shifts again. So all that shifting back and forth, I think, can trigger the onset of Hashimoto’s if there are some predisposing factors there.
So yeah, I agree, it’s a really, it’s a bummer, right? It doesn’t seem fair that something so amazing and beautiful as delivering a child could trigger that. But unfortunately, it often does happen. It’s one of the triggers for many women who experience this.
The Problem with Conventional Thyroid Tests
Let’s dive into a little bit more discussion about the problems with the conventional approach to thyroid testing. Because I think a lot of people out there in your audience and in mine are somewhat familiar with this. But for newer listeners, I think it’s really important to get this across. Because as we’ve been talking about, so many people will go into the doctor and just get a TSH level. And we’ve talked a little bit about the issues with only testing for TSH. But let’s actually dive into the conventional range for TSH and why that’s problematic. You mentioned that your TSH was 4.5 and you were told that that’s normal. But what’s the problem with that?
Izabella Wentz: Normal is a setting on a washing machine.
Chris Kresser: Right.
Izabella Wentz: So, what’s kind of interesting: when I was initially doing the research into my own health and the research for my first book, it was very interesting to find out how the “normal ranges” of TSH were determined. It was through using just a bunch of different people’s blood, and some of the people within that pool of blood actually had thyroid disease. And so the reference range became overly lax. We had this huge range because people with thyroid disease actually happened to be in the group of “healthy people.” So, to make a long story short, there’s this really big reference range, when really, if they just looked at the blood levels of healthy people without thyroid disease, the TSH should be somewhere between 0.5 and two.
And that’s not even accounting for all the other things we talked about, like the changes throughout the day, as well as the different fluctuations when you have the early stages of the condition. So yeah, the reference ranges are just too lax. And oftentimes, I’ll have women that are like, “Oh, yeah, I’ve been tested for thyroid disease.” And I’m like, “You’re wearing a coat and it’s 90 degrees outside. Let me see your labs.”
Chris Kresser: You don’t have any eyebrows anymore and you left a trail of hair as you walked through the building. Yeah, no, it’s really crazy because this is, of course, not just for TSH. This is true for even other thyroid markers, but many other markers on a blood panel. And it’s crazy to me that so many lab ranges are built by studying sick people, or at least including people who have the disease that you’re trying to screen for in the sample. It just doesn’t make any sense at all.
And yet that’s what we have with TSH. And I know that there’s definitely still controversy about this, and there’s controversy about whether a TSH of, let’s say, 2.5 when the thyroid hormones are normal is cause for concern. Because of the variability of TSH that we mentioned, if your thyroid antibodies, your free T3, and your free T4 are normal and you have a single TSH reading of 2.5, personally I don’t think that means, “Ah, you’ve got hypothyroidism. Ah, we need to do something about it right away.”
But it’s definitely something that I would watch out for and continue to look at. But if your TSH is 4.5 like yours was, Izabella, then that starts to, in my mind, be pretty significantly outside the range of what we see in healthy people.
Izabella Wentz: And the other thing is I was so symptomatic too.
Chris Kresser: Yes.
Izabella Wentz: And so I was, like, losing hair and I was forgetful, I was very sloth-like in my day-to-day activities. I was sleeping 12 hours a night, and I also had the really high antibodies. Had they been tested, that would’ve been found and that would’ve been a clear … I love that in Functional Medicine we have so many different options. When somebody has TSH that’s maybe slightly elevated, we can do some of these lifestyle things and wait a few months. And maybe that TSH will normalize.
In conventional medicine, it’s basically, “Oh, well, you have these antibodies and you have this slightly elevated TSH, but it’s not high enough for us to treat. So why don’t we just, like, wait until your thyroid gland burns itself out. Come back to me when … You’re feeling terrible now. Come back to me when you feel, like, more terrible.”
Chris Kresser: “Come back to me when your thyroid gland has been so destroyed by the antibody production that there’s nothing left to do but give you thyroid hormone,” is kind of what it looks like, right?
Izabella Wentz: Right, and as a pharmacist, I’m an advocate of actually using thyroid hormone earlier in the game, because it’s one of the things that’s been shown to, one, relieve symptoms, which is super important, and two, slow down some of the progression of the condition. It’s not a cure by any means, but it can bring down that TSH a bit so that we’re not, I guess, drawing so much attention to the thyroid gland from the immune system’s perspective. And it allows the thyroid gland to kind of chillax a little bit and bring down some of that inflammation.
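The contrast between the two screening philosophies discussed above can be sketched in a few lines. The cutoffs here (a conventional lab range of roughly 0.45 to 4.5 mIU/L, the 0.5 to 2.0 functional range cited in the conversation, and a 35 IU/mL antibody threshold) are simplified illustrations for this example, not clinical guidance:

```python
def screen_panel(panel, functional=True):
    """Flag a thyroid panel under two screening philosophies.
    Cutoffs are simplified for illustration, not clinical guidance."""
    flags = []
    lo, hi = (0.5, 2.0) if functional else (0.45, 4.5)
    if not lo <= panel["tsh"] <= hi:
        flags.append("tsh")
    if functional:
        # A functional screen also looks at TPO and TG antibodies.
        if panel.get("tpo_ab", 0) > 35 or panel.get("tg_ab", 0) > 35:
            flags.append("antibodies")
    return flags

# Roughly Izabella's numbers in her mid-20s: TSH 4.5, TPO antibodies over 2,000.
panel = {"tsh": 4.5, "tpo_ab": 2000}
print(screen_panel(panel, functional=False))  # [] -> "everything is normal"
print(screen_panel(panel, functional=True))   # ['tsh', 'antibodies']
```

The same numbers pass a conventional TSH-only screen and fail a functional screen on two counts, which is the pattern both speakers describe.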
Chris Kresser: Yeah. So let’s go back to the thyroid panel now. So, we know that the range for TSH is too broad and that when healthy people have been studied with a normal functioning thyroid, you see TSH between 0.5 and two or 2.2, depending on what studies you look at. Sometimes I will see panels from primary care providers or conventional practitioners that have TSH and total T4. So it’s a step up from just TSH, but then it’s total T4 only. So what’s the problem with that?
Izabella Wentz: I think that—and like I said, it’s a great step in the right direction—but at the same time I love to utilize free T4 and free T3 because that tells us how much thyroid hormone is actually available to interact with thyroid hormone receptors. The total T4 includes the thyroid hormone that can be bound up and not available for the body to use. And this could be for various reasons, with different types of hormone abnormalities, potentially stress response, nutrient deficiencies.
Chris Kresser: Right. And then you have the issue that T4 has to be converted into T3. And we know 93 or 94 percent of the hormone that’s produced by the thyroid gland is T4, which is relatively inactive. That has to be converted into T3 in order to, as you say, become biologically active and complete the mission of thyroid hormone. And one of the things that decreases the conversion of T4 to T3 the most is inflammation, which you would of course expect someone with an autoimmune inflammatory condition to have, right?
Izabella Wentz: Absolutely, and that’s one of the things that is also on my list of pet peeves with the conventional approach. Because a lot of times, just T4 medications are utilized for people when they have a thyroid condition. And people will say, “Oh, I feel a little bit better. Maybe I only need 11 hours of sleep instead of 12.” But they’re still not converting the medication into the active hormone. T4 is known as a “pro-hormone,” which means the body needs to do something with it to make the more active version.
And on paper, the T4-to-T3 conversion happens perfectly every time. In the human body, not so much. Like you said, there are so many different things, inflammation being one of them, nutrient deficiencies, stress, that can prevent the T4-to-T3 conversion. One of the big things that I see is actually an impaired liver. If we have an impaired ability to detoxify, and these are not things you can necessarily find on a conventional lab test, there are subclinical states where the person will just not be making enough of that T3 hormone.
And a lot of times I’ll advocate for using different types of thyroid hormone medications that contain not just T4 but also T3, and sometimes T2 and T1 to really ensure that the person has the best outcomes that they could have from medication therapy.
Chris Kresser: Absolutely. And for those that aren’t familiar, the T4 medication is levothyroxine, or Synthroid in the US. This is, as Izabella said, the standard of care. This is what most patients will get if they are diagnosed with hypothyroidism. And since Hashimoto’s, statistically speaking, is the number one cause of hypothyroidism in the US, we’re not talking about a rare thing here. We’re talking, like, most people who are diagnosed will have it.
And Synthroid can work, no doubt, for some people. But for a lot of people, many people, perhaps most with hypothyroidism, they’re going to have a problem with that conversion. And then what happens, and I’m sure you saw this as a pharmacist, is the doctor has to prescribe higher and higher doses of levothyroxine because the conversion is not happening and the patient still feels bad. And then their TSH basically gets to zero. They’ve got high T4, but their T3, especially their free T3, might still be low, and they might be really symptomatic. So they hit kind of a dead end there, don’t they?
Izabella Wentz: They become lopsided in their thyroid hormones, and they can have reactions from high T4, like different types of joint pains and pains in their muscles. And what’s unfortunate is that they’ll continue to have hair loss, they’ll continue to have weight gain, they’ll continue to have mood issues and brain fog. And a lot of times the conventional approach will say, “Okay, well, you’re still depressed and your thyroid is normal. So it cannot be your thyroid. You need to see a psychiatrist,” or, “You’re still losing hair. Well, it’s not your thyroid because the TSH is normal and you’re on thyroid hormone, so therefore go see, like, a dermatologist or whatnot.”
And then of course the wise psychiatrist and the wise dermatologist will say, “Hey, what about this T3 hormone?” What’s interesting is that T3 medications were used by some psychiatrists for treatment of refractory depression. And it’s kind of frustrating for the average person, because they get a runaround from conventional medicine. You almost feel like you’re going crazy because nobody is validating what you’re going through. And the experiences of people with thyroid conditions are so consistent; I’ve talked to thousands of people with Hashimoto’s, and so many of them go through the same things.
And sometimes, honestly, when I work with a client, it’s like they just want to be heard. And sometimes I’m the very first person that says, “Yeah, it makes a lot of sense that you’re feeling this way even though you’re doing X, Y, and Z.” And I see this time and time again.
Chris Kresser: Yeah, yeah. Again, I mentioned this before and I’ll say it again: this is such a perfect example of the need for a functional approach. You had mentioned, when we were talking about the problems with converting T4 to T3, that inflammation is one of the primary drivers. Well of course, Functional Medicine can help us get to the root cause of that inflammation, which is often, though not always, in the gut.
But we also know, you mentioned the liver, most of the T4-to-T3 conversion happens peripherally. Meaning not in the thyroid gland itself. It happens in the gut. It happens in the liver. It happens in the cells around the body. And I think about 20 percent of the conversion of T4 to T3 happens in the gut. So if somebody’s gut is not functioning well, that could actually cause a thyroid issue even if they’re producing enough thyroid hormone. The thyroid is making enough T4 but because of the inflammation and the gut issues, they’re still experiencing hypothyroid symptoms and low T3 because of what else is going on in the body. Sometimes this is referred to as low T3 syndrome. We’ve both talked and written about this a lot, but it highlights the need for a really comprehensive approach.
Another issue, of course, that you alluded to earlier is nutrient deficiency. So we know that zinc and selenium are required to convert T4 to T3. So, there’s so many things that need to be looked at here. And this is one of the deficiencies of the model, and you highlighted it so well, where instead of taking this more comprehensive approach with one person who can see the whole picture, someone gets referred to a psychiatrist and a dermatologist and a gastroenterologist and all these specialists, who are really just kind of looking at it through a very narrow lens and nobody’s putting all the pieces together.
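The point about peripheral conversion can be put into toy arithmetic. The 20 percent gut share comes from the conversation above; the remaining split between the liver and other tissues is an illustrative assumption, as is the whole linear weighting:

```python
def active_t3_fraction(gut=1.0, liver=1.0, other=1.0,
                       shares=(0.20, 0.30, 0.50)):
    """Weight each peripheral T4-to-T3 conversion site's share by how
    well it is working (1.0 = fully functional). The 20% gut share is
    from the episode; the 30/50 liver/other split is an assumption."""
    gut_share, liver_share, other_share = shares
    return gut_share * gut + liver_share * liver + other_share * other

healthy = active_t3_fraction()
gut_inflamed = active_t3_fraction(gut=0.25)  # hypothetical severe gut dysfunction
print(f"healthy conversion: {healthy:.0%}, inflamed gut: {gut_inflamed:.0%}")
```

Even with a thyroid gland producing plenty of T4, knocking out a single conversion site lowers the T3 that actually becomes available, which is the low T3 syndrome pattern described above.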
Izabella Wentz: Yeah, I always joke that you go to see the left arm doctor and then you go to see the right arm doctor, and then they’re each just looking at one part of you, right?
Chris Kresser: Yeah, yeah. I mean, we can go on and on about this. I think most of the listeners are familiar with the issues that we’re talking about here. But I find that the Hashimoto’s in particular is such a good example of the ways that we need to improve our current system. Because the traditional approach is really, really failing patients.
Autoimmunity and Hashimoto’s
Okay, so we’ve talked about the issues with TSH. We’ve talked about why testing for TSH, and even just testing for total T4 and total T3 for that matter, are not enough, and we need to test free T4 and free T3. We talked about testing for thyroid antibodies and why that’s so important. But let’s dive even a little deeper on the autoimmune piece. Because let’s say somebody tests for—trick question here—thyroid antibodies and they’re normal. Can we then just say, “Okay, they’ve tested once for their thyroid antibodies, they’re normal, they absolutely don’t have Hashimoto’s”? Because that happens a lot too. We see this in the conventional setting. Somebody begs their practitioner to test for thyroid antibodies and they do it once, the person doesn’t have positive antibodies, and the practitioner says, “Okay, you don’t have Hashimoto’s.” What’s the problem with that?
Izabella Wentz: You know there’s also something called seronegative Hashimoto’s where, back in the day depending on what study you looked at, they would say that 80 to 90 percent of people with Hashimoto’s had one and/or the other antibody. And so there was 10 to 20 percent of people who could have Hashimoto’s without any of those antibodies.
Chris Kresser: Yep.
Izabella Wentz: But now more studies have been done using fine needle aspiration, which is an invasive procedure and is generally not done to test for Hashimoto’s. It’s generally done to look at thyroid nodules, to test them for cancer or any kind of abnormalities. When you use that procedure, you can look at the cells within the thyroid gland under a microscope, and you can tell if there were changes consistent with Hashimoto’s in those cells. And unfortunately or fortunately, this method can uncover additional cases of Hashimoto’s even when the thyroid antibodies are “normal.” And again, in conventional medicine, some labs will say if they’re under 100, they’re normal. Or if they’re under 35, they’re normal. If they’re under nine, they’re normal.
And you know what’s normal, again? I would say under maybe one or two might be “normal,” but you also want to look at the thyroid ultrasound, because we can see some of the changes consistent with Hashimoto’s on a thyroid ultrasound. And then, as I mentioned, with fine needle aspiration we can find more cases of Hashimoto’s that way. But, how could I put this? In order to find every case of Hashimoto’s, we would have to look at every single thyroid cell under a microscope. So you’d have to remove the entire thyroid gland and look at it under a microscope to really rule out Hashimoto’s.
Chris Kresser: Yeah, and this goes back to what we were talking about before with how the lab range for TSH was developed in the NHS cohort, the Nurses’ Health Study. They did try to eliminate people with known—and also undiagnosed—Hashimoto’s by testing for thyroid antibodies. But as you just explained, that will not eliminate everybody with Hashimoto’s. In fact, quite a few people will be missed. And then antibodies fluctuate.
We know that there’s a relapsing and remitting characteristic of Hashimoto’s. Anyone who has Hashimoto’s knows this, that if they test their antibodies serially over time, they’ll see them go up and down, depending on what’s happening. And if you just do one test, you might catch someone on a good day, right? And their antibodies are normal. And then I think I also see this—sometimes practitioners will only test for TPO antibodies and not do thyroglobulin.
You mentioned that before, like, that some people have one antibody, not the other. And it’s not unusual for me to see someone who has normal TPO antibodies but very high thyroglobulin antibodies. And that person would be missed if only TPO is tested for.
Izabella Wentz: And to get even more nerdy, there are additional types.
Chris Kresser: We like nerdy here. So go ahead.
Izabella Wentz: There are additional types of thyroid antibodies that we don’t necessarily even test in the real world. They might be available to scientists.
Chris Kresser: Antibodies to thyroid hormone itself, which are not typically tested for. And then some of the Graves’ antibodies, and yeah.
Izabella Wentz: Iodine transporter antibodies, the list just goes on and on. And I don’t write about it on my blog, but I did, like, a healthcare professional presentation a few years back, and at that time I think there were, like, 16 different ones that were identified. I’d have to go back to my notes, but yeah. So you could have antibodies that haven’t even been described yet that are a part of your thyroid physiology at this point.
Chris Kresser: Yeah, and so this is where we come back to—
Izabella Wentz: Good old symptoms.
Chris Kresser: Symptoms and also just research. Because it’s important. When we look, there are different kinds of evidence, right? There’s clinical evidence, there’s evidence that we look at from research. There’s anecdotal evidence. But when we look at the research, we see that, statistically speaking, Hashimoto’s is the most common cause of hypothyroidism in this country. In parts of the developing world where iodine deficiency is still more prevalent, that is the number one cause. But here in the developed world, it’s Hashimoto’s.
And then if you rule out iodine deficiency and nutrient deficiency and other potential causes of hypothyroidism, and if the patient has symptoms that are consistent with autoimmune inflammatory disease, which we’ve talked about throughout the show, and maybe they have higher T4 and lower T3 and certain things which are known to trigger or exacerbate the immune system and make their condition worse (maybe they had a viral infection, they had mono, and their symptoms started after that, or maybe their symptoms started after they delivered a child), then you put those pieces together and you make a clinical diagnosis. Even if you don’t have conclusive evidence through antibodies that it’s Hashimoto’s, it’s there. In Functional Medicine, this is what differentiates it from conventional medicine.
Again, there’s no downside to treating it as an autoimmune condition, like, doing some diet changes and things like that, because we’re not using immunosuppressive drugs, like you might with Crohn’s disease or something like that. In that case, the standard of proof needs to be higher. But with Hashimoto’s, if you suspect it, going on an autoimmune protocol diet is, to me, even one way to test the theory.
Izabella Wentz: Right. And it’s, yeah. I feel like we’re still learning so much about Hashimoto’s at this point, and a lot of times we need to trust people on what they think is going on with them. And a lot of times the recommendations we make from a Functional Medicine standpoint are going to be overall helpful to the body. So when we have a person with Hashimoto’s, I’m not necessarily thinking, “Hey, let’s suppress your immune system because it’s overactive.” I’m thinking, “What infections could you have that are setting you off? What foods are setting you off? What nutrient deficiencies do you have that we need to address so that your body will be able to balance itself better?”
And we’re really focusing on treating the person as a whole, in their whole body, and that’s what’s, I think, wonderful about this. From a conventional medical standpoint, when I used to be a regular pharmacist, we would give people medication to treat one condition and then we would bring on symptoms of another condition.
Chris Kresser: Well, there’s another medication for that.
Izabella Wentz: Yeah, like for people who are depressed, we’d give them Effexor. And then they get high blood pressure, which is a side effect of Effexor that most psychiatrists don’t really know about because they don’t test blood pressure in their patients.
Chris Kresser: Yeah.
Izabella Wentz: And so then the person would go back to their primary care doctor and they’d be like, “Oh, you have high blood pressure. Let’s give you a blood pressure medication for that.” And then you get a blood pressure medication, and some of them can potentially cause fainting. I had one client who … I used to be a consultant pharmacist for people with disabilities. And a lot of times they weren’t able to advocate for themselves. So I was the one that was sent in when they were having all these issues.
And I had one client, no joke, he started off on an antidepressant, then was given this blood pressure medication that caused him to faint, because when he would stand up it would drop his blood pressure and heart rate too much. And so then he was put on a seizure medication, because people thought that he was having seizures. And the seizure medication caused some abnormal overgrowth in his gums. So then he was placed on another medication for that. And it was just like one thing after another, and it all traced back to one initial medication.
How to Use Food as Medicine for a Thyroid Disorder
Chris Kresser: Yeah, the medication treadmill, I call that. And it’s a very real thing. It’s why so many people over the age of 65 are taking five or more medications. So let’s talk a little bit about how to avoid that in the case of Hashimoto’s. Your first book, Hashimoto’s Protocol, was really focused on reversing symptoms, but your new book, Hashimoto’s Food Pharmacology, looks more at food as medicine. And to me this makes a lot of sense. I agree with you a hundred percent, actually.
Often my patients are surprised to hear me say this: if Hashimoto’s has already progressed to the point where there’s been some destruction of the gland (and even at the very early stages), using thyroid hormone is just smart because it’s so crucial for so many cells in the body. And it can prevent or slow the further progression of Hashimoto’s, as you mentioned before. At the same time, in the conventional model, that’s the only thing that is done. So let’s talk about some of the other steps that people can take, particularly with food, to quell the inflammatory response that really is the root cause of the condition.
If we think of this from a Functional Medicine perspective, I’m always telling patients, “You really don’t have a thyroid problem. You have an autoimmune problem that’s affecting your thyroid.” So when we look at this, what are the biggest things for you, in terms of food, that can help people kind of put the brakes on this inflammatory reaction?
Izabella Wentz: So, I love using food as medicine, and food pharmacology is just amazing: the things that we put in our bodies can have such profound effects on us. It seems so easy, but at the same time, when I was in pharmacy school, I just didn’t quite see that. I just thought medications had an effect. But we know that everything we put in our body is going to send messages to it. And so with Hashimoto’s, and just about every autoimmune condition, and I would say about every health condition, this is why I love using food so much: generally, you’re not going to make one thing worse by focusing on treating your Hashimoto’s with food.
And so for me, I really look at the patterns that most people with Hashimoto’s have, and I would argue that most people with autoimmunity have these too. One of them, which we talked about, is going to be micronutrient deficiencies. And so we want to figure out which nutrients are going to be deficient. And many times we can address this by eating a nutrient-dense diet. Sometimes we may need to add some enzymes to ensure that we’re extracting the nutrients from these foods properly. And thyroid hormones can help with that as well, because if we have an underactive thyroid, then we might not be extracting nutrients properly. And then in some cases, we may need to add some supplemental nutrients as well.
Then we’re looking at macronutrient deficiencies. So a lot of times people with Hashimoto’s will have diets that are deficient in protein and fat. Sometimes this is a consequence of a long-term deficiency in digestive enzymes, where we find that we don’t feel so good after we eat protein or fat, so we just end up gravitating more towards carbohydrates. It also doesn’t help that most of our nutrition education comes from commercials nowadays. I know when I was in pharmacy school, I was shocked that carbohydrates were not, like, a requirement. I thought fat was something that was optional because of all the commercials I watched for a low-fat diet.
Chris Kresser: Yes.
Izabella Wentz: Of course, I’ve already mentioned deficiencies in digestive enzymes. Blood sugar swings—this is something crucial that can be addressed with proper nutrition. And then we have a toxic backlog. A lot of times we find that the foods that we’re eating can be contributing to that toxic load and not helping us with it. And this could be some of the processed foods, some of the toxins that are present in our foods, genetically modified foods. And then one really easy thing is fluoride in our water supply. Unfortunately, that can have some adverse effects on the thyroid.
Food sensitivities—this is a biggie. Oftentimes, unfortunately, I find that’s the only thing people focus on. Food sensitivities are a big deal when you have Hashimoto’s. There are going to be foods that are reactive that can make your thyroid condition and your symptoms worse. We want to make sure that we’re removing those foods, but we don’t want to get obsessive about removing every food forever. Because the key is to restore our body’s ability to tolerate as many foods as possible. And then intestinal permeability.
So we want to always focus on that piece with nutrition, just because that can be contributing to everything else that we may see. So I would say those are kind of like my big goals for people of things they can do with food.
Chris Kresser: Those are great goals. And where do you come down with AIP and the autoimmune protocol?
Izabella Wentz: I think that AIP is a wonderful protocol. I have found in my experience that people typically do best on a Paleo-like diet. I did some outcomes research with my clients initially, and then with readers. A majority of them do best with a gluten-free diet: 88 percent of people find that gluten is not their friend, and being gluten free makes them feel better. And then we’ve got about 80 percent of people improving on a Paleo diet, and close to that with autoimmune Paleo.
I’m not the person that’ll say, like, every single person with Hashimoto’s needs to be 100 percent autoimmune Paleo. I feel like everybody needs to be individualized. These diets are amazing templates. I will say, in my book I have three different dietary templates, and one of them is the introductory template, which is gluten free, dairy free, and soy free, and you can start there. And then if you still have more symptoms and issues, you’ll eliminate more foods until you get to more of a Paleo template.
And then if you don’t do well there, or if you plateau there, then you may eliminate more foods to get into the autoimmune Paleo-like template. Or the other option I recommend for people, depending on where they are, is to start off with the autoimmune Paleo template and then add more foods back in. And there are different reasons and different seasons; there might be foods on autoimmune Paleo that they may not do well with, and there may be foods that they’re just fine with.
So it’s a matter of, I feel like you want to have a good template. You want to start off there and pick one that fits best for you right now. And then you’re going to want to modify that based on your individuality. Like, my goal for people is to become their own nutrition gurus and to kind of awaken their intuitive ability to figure out which foods serve them, which ones cause them harm.
Chris Kresser: Yeah, I love that approach. And it’s very similar to how I think about it. AIP can be a game-changer for some people, there’s no doubt about that. I’ve seen that. There’s actually even now some peer-reviewed research supporting its use for Crohn’s and inflammatory bowel disease, which found that it achieved similar or better results than standard immunosuppressive drugs, which is pretty incredible.
So we know that that can have a great impact for some people. At the same time, it is an extremely restrictive diet and it’s not necessary for all people. And I think there has been a sort of misconception. I see this in my patients. Patients come in and they tell me they’ve been on AIP, and I ask them if they’ve really had any benefit from it. And they say, “No, not that much compared to, like, a more expanded Paleo diet.” And I ask them why they kept doing it. And they tell me they just thought that that’s what they need to do because they have an autoimmune disease and everyone who has an autoimmune disease should be on AIP.
And my explanation to them is, “No, there’s not enough research to suggest that the foods removed in an AIP approach are universally harmful and that, therefore, anyone with any kind of autoimmune disease should be on it for life.” I think that’s a huge stretch from what the research has told us so far. And it sounds like you and I are on the same page: people should basically eat as diverse a diet, within a template of healthy foods, as they can tolerate and feel good on.
Izabella Wentz: Absolutely. And in my experience, it’s not necessarily that the foods are evil. It’s that we have a leaky gut for whatever reason. And whatever foods we’re eating when our gut is leaky are the foods that we’re going to react to. And the autoimmune Paleo diet tends to work in our Western society because it eliminates some of the most common foods that are over-eaten.
And so a person, let’s say they were on an all-coconut diet every day, then they got an H. pylori infection or a Blastocystis hominis infection and were not able to digest fat properly. Because of some digestive enzyme deficiency, they may end up becoming sensitive to coconut milk. And I think coconut is an amazing, hypoallergenic food. But we really want to look at the individual and what their sensitivities are. And we also want to do a whole-person approach. We don’t just want to say keep removing more foods and you’ll be healed. We want to say, “Okay, why are you sensitive to these foods? What can we do to reduce that sensitivity?” And some of the things that I talk about in my book are going to be digestive enzymes. We’re going to be addressing nutrient deficiencies.
I recommend actually trying to rotate foods so you’re not eating the same thing for breakfast, lunch, and dinner every day. You’re giving your body an opportunity to sort of recover from eating the same things over and over. And the big piece, of course, and I always tell this to everybody, is that if you’re not getting better on a diet after three months, it’s really important to consider looking at other things: working with a Functional Medicine practitioner, or whether you have a gut infection that’s making you intolerant to everything.
Chris Kresser: So important, so important. And I would say—and we don’t have time to go into detail on this; I’ll be talking about it with another guest in an upcoming podcast—there are so many different converging lines of evidence now that suggest we can heal food intolerances in many cases. Maybe not in all cases, but by addressing gut permeability, by addressing the gut–brain axis and looking at the gut as the second brain in the nervous system, and by taking steps to reduce stress, there’s a lot that we can do.
And as you said, Izabella, in Functional Medicine we’re really concerned with the root cause. Food is often not the root cause. It’s a trigger because there’s an existing root cause of a leaky gut or a disrupted gut microbiome. So we need to always remember that removing the trigger does not necessarily address the cause. It could be one necessary step, at least for some period of time, but it’s often not enough on its own to address that underlying cause.
Izabella Wentz: Yeah, I’ll be really looking forward to that discussion. I personally have seen a lot of success in people getting rid of food sensitivities with some of the strategies I mentioned, and also with using systemic enzymes. That’s been a nice little thing that people can add. And my goal in Food Pharmacology was to include all these different helpful things so people aren’t just removing more foods; they have a comprehensive guide to become, like I said, their own nutrition gurus.
Chris Kresser: Great. Well, thank you so much for joining us, Izabella. I love your most recent book, Hashimoto’s Food Pharmacology. I think it’s such an important contribution because as we’ve discussed all along, the standard of care of just prescribing thyroid hormone, especially Synthroid, or levothyroxine, is just not sufficient for people with hypothyroidism and Hashimoto’s. And looking at things from a more holistic view is really going to make a far bigger impact long term for everybody. So where can they find the book and learn more about it?
Izabella Wentz: The book can be found on Amazon and Barnes & Noble, wherever books are sold. And hopefully it helps keep people on their health journeys. I know it’s hard when you’re first diagnosed and trying to figure it out. And hopefully this gives people a bit of a tool to take charge of their own health. Thank you for having me. It’s such a pleasure to be here with you. I’m a huge fan of your work, and thank you for the work that you’re doing in the world.
Chris Kresser: Well, you’re welcome, and thank you, Izabella. And where can people find more about your work?
Izabella Wentz: Thyroidpharmacist.com is where I usually hang out. It’s my website, and I share all kinds of new research about Hashimoto’s and some of the things that have worked for me, as well as my clients and readers.
Chris Kresser: And I think you have a very active Facebook community.
Izabella Wentz: Absolutely. You can find me on Facebook.com/thyroidlifestyle, or search for Izabella Wentz or Thyroid Pharmacist, you’ll be able to find me. I usually pop in there just to say hello and answer readers’ questions on a daily basis.
Chris Kresser: Great, great. Well, thanks again, and good luck. And we’d love to have you back on the show in the future.
Izabella Wentz: Thank you so much, and you have a wonderful day.
Chris Kresser: You too, take care.
9 Natural Remedies for Hyperthyroidism and Graves’ Disease
How To Care for an Overactive Thyroid and Graves’ Disease
Conventional treatments for hyperthyroidism — which is much more common in women than in men — can come with significant side effects or permanent damage to your thyroid gland. The good news is that there are a lot of other options. Natural remedies for hyperthyroidism and general thyroid function improvement include:
Dietary changes, such as a gluten-free diet
Specific supplements, such as selenium, probiotics, and vitamin D
Herbs, such as bugleweed and lemon balm
Let’s explore what hyperthyroidism is, the risks of conventional treatment, and the many natural remedies for hyperthyroidism.
What Is Hyperthyroidism?
Your thyroid gland is located at the front of your neck, and it produces thyroid hormones. These hormones regulate many essential endocrine functions in your body, including energy production, digestive function, and more.
Hyperthyroidism — also called an overactive thyroid — is when your thyroid gland produces too much thyroid hormone. The most common reason for hyperthyroidism is an autoimmune attack on the thyroid gland called Graves’ disease. Though it’s far less common than other thyroid disorders, an estimated one in 200 Americans has Graves’ disease [1]. The majority of thyroid patients, including hyperthyroid patients, are women [2, 3].
Signs and symptoms of hyperthyroidism can include:
Unintended weight loss, even when appetite stays the same or increases
Rapid or irregular heartbeat and palpitations
Nervousness, anxiety, and irritability
Tremors, sweating, and sensitivity to heat
An enlarged thyroid gland (goiter)
Bulging eyes (Graves’ ophthalmopathy)
Thick, red skin usually on the shins or tops of the feet (Graves’ dermopathy)
Your doctor usually diagnoses hyperthyroidism with:
A blood test for levels of thyroid hormones
A radioactive iodine uptake test
If your blood test shows low TSH (thyroid-stimulating hormone) and high free T4 thyroid hormone, this means you are hyperthyroid [5]. If you also have elevated thyroid antibodies, including thyroid-stimulating immunoglobulins (TSI), thyroid peroxidase (TPO), or thyroglobulin (TG) antibodies, you may be diagnosed with Graves’ disease [6, 7].
The radioactive iodine test can help rule out other possibilities, such as thyroid nodules, toxic multinodular goiter, or thyroid cancer [8]. This test does have some potential side effects, so be sure to discuss it with your doctor before taking it.
Causes of Hyperthyroid Disease
There are three main causes of hyperthyroidism:
Graves’ disease, an autoimmune disease of the thyroid gland that leads to hyperthyroidism. Graves’ disease is the most common cause of hyperthyroidism.
Thyroid nodules may affect the production of thyroid hormone and induce hyperthyroidism [13].
Thyroiditis, an inflammation of the thyroid gland, can cause stored thyroid hormone to leak into the bloodstream, leading to temporary hyperthyroidism.
No matter the cause, it’s very important to get hyperthyroidism under control. As excessive thyroid activity can lead to heart damage and a life-threatening “thyroid storm,” we want to utilize effective treatment approaches that not only decrease the effects of excessive thyroid hormone, but, if possible, address the root causes for hyperthyroidism in the first place [14].
Natural Treatment Options for Hyperthyroidism and Thyroid Wellness
Treatment of hyperthyroidism must accomplish two goals:
Stop the damaging effects of excess thyroid hormone
Resolve the root causes so that the symptoms stop and don’t recur
Conventional treatment for hyperthyroidism tries to address both of these but doesn’t address the frequently underlying autoimmunity. Anti-thyroid medications such as methimazole — or drugs like beta-blockers that reduce the potential heart damage of excess thyroid hormone — are the first level of treatment for hyperthyroidism. While these medications can help reduce symptoms of hyperthyroidism, they do not significantly address the underlying cause(s) of excess thyroid hormone.
Secondary conventional treatments, such as radioactive iodine therapy or thyroid surgery, permanently damage or remove the thyroid gland to permanently stop the production of thyroid hormones. This stops hyperthyroid symptoms and the excess circulating thyroid hormones. However, destruction of the thyroid gland leaves patients hypothyroid and needing T4 (thyroxine) replacement therapy for life and still doesn’t address the underlying autoimmunity.
The good news is there are natural remedies for hyperthyroidism that rival conventional treatments in their effectiveness and also have fewer risks, consequences, and side effects. They are also likely to address the underlying autoimmunity and other root causes of hyperthyroidism. Let’s discuss.
Natural Remedies for Hyperthyroidism: Diet
Intestinal permeability [16] — also called leaky gut — is suspected to contribute to the development of autoimmune disease [17, 18, 19, 20, 21, 22, 23]. We know that imbalanced gut bacteria can increase intestinal permeability [24, 25, 26], as can eating certain foods, such as gluten [27].
Most research on thyroid health and diet has studied how different foods impact an underactive thyroid (also known as hypothyroidism). However, some of these studies may be relevant for autoimmune hyperthyroidism, because they show an anti-inflammatory diet can reduce thyroid antibodies.
For example, a gluten-free diet was shown in one study to reduce thyroid antibodies in a group of women with Hashimoto’s thyroiditis [28]. A gene associated with Graves’ disease (CTLA-4) [29] is also associated with celiac disease [30, 31], indicating that gluten sensitivity may be a factor for some Graves’ patients.
A simple, anti-inflammatory, whole-food diet that is gluten-free, nutrient-dense, and high in healthy antioxidants, like the paleo diet, is a great place to start improving your gut and thyroid health and addressing autoimmunity. The paleo diet has been shown to reduce inflammation by reducing exposure to foods that may trigger an immune response [32, 33].
Natural Remedies for Hyperthyroidism: Supplements
Supplements have a lot of promise as natural remedies for hyperthyroidism and play one of a few different roles. These include:
Reducing thyroid antibodies
Blocking the action of excess thyroid hormones
Reducing levels of thyroid hormones
Reducing hyperthyroid symptoms
Preventing relapse
Let’s review what we know about supplements for hyperthyroidism.
Selenium
Selenium, a mineral that is used as a dietary supplement, has a number of specific, documented benefits for Graves’ disease.
Patients with Graves’ disease are more likely to have lower selenium levels [34], and a meta-analysis showed that patients with high antibody levels are more likely to have a relapse [35]. Selenium has been shown to reduce antibodies and the symptoms associated with Graves’ disease [36], and higher selenium blood levels have been shown to reduce the relapse rate of Graves’ [37].
Overall, this is very good evidence that selenium supplementation is worth a trial in your hyperthyroidism treatment plan.
L-Carnitine
L-Carnitine is an amino acid supplement that has been shown to reduce or prevent hyperthyroid symptoms. It’s fast-acting, has a very low risk of side effects, and is even safe for pregnant women with Graves’ disease [46]. A clinical trial found that L-carnitine had a positive effect on [47]:
Weakness and fatigue
Shortness of breath
Palpitations
Nervousness
Insomnia
Tremors
Heart rate
Bone mineral density
However, in this study, L-carnitine did not affect the levels of TSH, free T4, or free T3 thyroid hormone.
L-carnitine can also be used to treat a “thyroid storm”, the most severe, life-threatening form of hyperthyroidism [48, 49].
Lemon Balm & Bugleweed
Two herbs, lemon balm (Melissa officinalis) and bugleweed (Lycopus europaeus), have been shown in limited studies to reduce hyperthyroid symptoms and to block or reduce thyroid hormones.
In one study, bugleweed was shown to be as effective as beta-blockers for protecting the heart from damage from hyperthyroidism [50]. In another study, it was shown to reduce an elevated heart rate from Graves’ disease in humans and rats [51, 52].
Additional studies have indicated that bugleweed and lemon balm may block or decrease thyroid-stimulating hormone (TSH) and reduce T3 and T4 hormone levels, which would reduce the symptoms of hyperthyroidism [53].
The evidence here is more preliminary and lower quality than with selenium or L-carnitine, but bugleweed and lemon balm are certainly worth considering as a short-term trial if you are hyperthyroid. Hopefully, future research will confirm these effects in larger samples.
Short-Term Iodine
The iodine molecule is the backbone of thyroid hormones, but curiously, research suggests that excess iodine may trigger hypothyroidism [54, 55], which makes iodine potentially useful for treating hyperthyroidism. A small study showed that 150 mg per day of potassium iodide reversed hyperthyroidism in some patients [56]. However, the study noted that the effect wasn’t permanent. Therefore, iodine may help you get your symptoms under control while you use other natural remedies for hyperthyroidism.
Probiotics
It might not seem like probiotics would have much to do with thyroid disease, but a growing body of research shows that thyroid patients very often have gut imbalances. People with thyroid disease more often have SIBO (small intestinal bacterial overgrowth) [57, 58], leaky gut [59], low stomach acid [60, 61, 62], and celiac disease [63], as well as gut infections like H. pylori [64] or parasites [65]. One particular study noted a strong association between H. pylori infection and Graves’ disease [66].
Probiotics help rebalance the gut microbiome and the immune system, reduce gut inflammation, repair the gut lining, and may improve hyperthyroid symptoms, including anxiety [67]. Even better, probiotics have a very low incidence of negative side effects compared to conventional treatment.
Vitamin D
Most thyroid-related vitamin D research has studied hypothyroid patients. This research suggests that vitamin D deficiency may be associated with higher levels of thyroid antibodies [68] and that supplementation with vitamin D may decrease them [69]. But one study showed that hyperthyroid patients who had lower vitamin D levels were more likely to relapse [70].
Considered together, these data suggest vitamin D supplementation may help reduce thyroid antibodies and relapse after treatment for hyperthyroidism.
Stress Reduction
Stress reduction is good supportive care, no matter what your health condition is. This is especially true for hyperthyroidism, where common symptoms include an increased heart rate, palpitations, and anxiety. There is no direct evidence that stress reduction practices can improve Graves’ disease or hyperthyroidism, but practices such as meditation, yoga, or cognitive behavioral therapy may support your healing process while you work on other treatments.
The Bottom Line
You don’t need to destroy your thyroid gland to get control of your hyperthyroidism. Simple diet changes, supplements, and stress reduction, sometimes alongside medication, can bring your body back into balance. If you need support managing your thyroid problems, consider becoming a patient at the Ruscio Institute for Functional Health.
Discussion
I care about answering your questions and sharing my knowledge with you. Leave a comment or connect with me on social media with any health question you may have, and I just might incorporate it into our next listener questions podcast episode just for you!
Disclaimer: (1) The information provided on this website is for educational purposes only and is not intended to diagnose or treat any disease. Please do not apply any of this information without first speaking with your doctor. (2) The Ruscio Institute is a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for us to earn fees by linking to Amazon.com and affiliated sites. (3) Amazon and the Amazon logo are trademarks of Amazon.com, Inc, or its affiliates. | -free diet was shown in one study to reduce thyroid antibodies in a group of women with Hashimoto’s thyroiditis [28]. A gene associated with Graves’ disease (CTLA-4) [29] is also associated with celiac disease [30, 31], indicating that gluten sensitivity may be a factor for some Graves’ patients.
A simple, anti-inflammatory, whole-food diet that is gluten-free, nutrient-dense, and high in healthy antioxidants, like the paleo diet, is a great place to start improving your gut and thyroid health and calming autoimmunity. The paleo diet has been shown to reduce inflammation by reducing exposure to foods that may trigger an immune response [32, 33].
Natural Remedies for Hyperthyroidism: Supplements
Supplements have a lot of promise as natural remedies for hyperthyroidism and play one of a few different roles. These include:
Reducing thyroid antibodies
Blocking the action of excess thyroid hormones
Reducing levels of thyroid hormones
Reducing hyperthyroid symptoms
Preventing relapse
Let’s review what we know about supplements for hyperthyroidism.
Selenium
Selenium, a mineral that is used as a dietary supplement, has a number of specific, documented benefits for Graves’ disease.
Patients with Graves’ disease are more likely to have lower selenium levels [34], and a meta-analysis showed that patients with high antibody levels are more likely to have a relapse [35]. Selenium has been shown to reduce antibodies and the symptoms associated with Graves’ disease [36], and higher selenium blood levels have been shown to reduce the relapse rate of Graves’ [37].
Overall, this is very good evidence that selenium supplementation is worth a trial in your hyperthyroidism treatment plan.
L-Carnitine
L-Carnitine is an amino acid supplement that has been shown to reduce or prevent hyperthyroid symptoms. | no |
Paleo Diet | Can the Paleo diet cause thyroid problems? | no_statement | the "paleo" "diet" does not "cause" "thyroid" "problems".. there is no evidence to suggest that the "paleo" "diet" "causes" "thyroid" "problems". | https://thepaleodiet.com/vegetarian-vegan-diets-nutritional-disasters-part-3/ | Vegetarian and Vegan Diets, Part 3: Other Nutrient Deficiencies ... | Vegetarian and Vegan Diets, Part 3: Other Nutrient Deficiencies
Vegetarian diets: Other nutritional shortcomings
You don’t have to look any further than the ADA’s Position Statement28 or the USDA’s recommendations on vegetarian diets142 to discover additional nutrient shortcomings caused by pure plant-based diets. The ADA matter-of-factly mentions that “…key nutrients for vegetarians include protein, n-3 fatty acids, iron, zinc, iodine, calcium, and vitamins D and B12…”28 The USDA notes that “…vegetarians may need to focus on…iron, calcium, zinc, and vitamin B12.”142
These subtle admissions of the potential nutrient deficiencies associated with vegetarian diets represent just the tip of the iceberg. There is little scientific evidence that people eating a lifelong plant-based diet can achieve adequate dietary intakes of omega-3 fatty acids (EPA and DHA), iron, zinc, iodine, calcium, vitamin D, vitamin B6, and the amino acid taurine without taking supplements or eating fortified foods.
Mineral deficiencies and vegetarian diets
One of the major issues assessing whether any diet provides sufficient nutrients has to do with whether or not the vitamins and minerals measured in certain foods actually get absorbed into our bodies. This is called the bioavailability of vitamins and minerals in foods. A food may contain a particular essential nutrient, but it isn’t bioavailable if we can’t absorb and use it.
Phytate, which is found in some plant foods, prevents the absorption of essential minerals. Whole grains and legumes are sources of phytate. Accordingly, our bodies have difficulty extracting certain minerals from these foods because they are tightly bound to phytate. Phytate in whole grains impairs calcium absorption and may adversely affect bone health. Further, phytate also binds zinc, iron, and magnesium, thereby interfering with their assimilation and incorporation into our cells.
Because vegetarian diets are very hard to follow without including lots of whole grains, beans, soy, and legumes, they are inherently high in phytate. This is why it is difficult or impossible for vegetarians and vegans to maintain adequate body stores of calcium, zinc, and iron.
Zinc Deficiencies in Vegetarian Diets
In part 2 of this series, we discussed how zinc is crucial for normal male reproductive function. However, it is also required for good health and disease resistance in virtually every cell in our bodies, whether you are a man, woman, or child.20, 41
If you have ever experienced painful cracked heels or nose bleeds that just wouldn’t stop, try rubbing zinc oxide ointments on these wounds – you will be amazed at how rapidly zinc can heal these stubborn sores. For many people, diet is the cause of marginal zinc status and outright deficiency. Anybody eating excessive whole grains and/or legumes and not eating meat, fish or animal products on a regular basis45, 59, 62 is at risk for the illnesses and health problems associated with borderline or deficient zinc intake.
Iron Deficiencies in Vegetarian Diets
Your body stores of iron run hand-in-hand with zinc. The same types of diets that produce zinc deficiencies – high-phytate vegetarian diets based upon whole grains, beans, soy and other legumes – also create iron deficiencies,5, 135 which are the most common nutrient deficits worldwide.
In the U.S. 9% of all women between 12 and 49 years are iron deficient, while 4% of 3 to 5 year old children have insufficient stores of this crucial mineral.25 If you are pregnant, low iron status increases your risk of dying during childbirth, and frequently causes low birth weights and preterm deliveries.
Even more disturbing is the potential for iron deficiencies to prevent normal mental development in our children and young adults.39, 90, 96 Plant-based diets not only increase the risk of impaired cognitive function in your children, but will also hamper your own mental functioning. Numerous experimental studies show that inadequate iron stores in adults can slow or impair tasks requiring concentration and mental clarity.73
One of the most noticeable consequences of diets that cause iron deficiency is fatigue. If you are an athlete or have a demanding job requiring physical exertion, low iron stores will invariably reduce your performance. A recent (2009) experiment involving 219 female soldiers during military training showed that iron supplements increased performance for a 2-mile run and enhanced mood.92 Similarly, a study by Dr. Hinton and colleagues demonstrated that iron supplements in iron deficient male and female athletes improved endurance performance and efficiency.56
Whether you are an athlete, a laborer or even an office worker, your best nutritional strategy to improve iron stores, add vigor to your life and improve performance is to eliminate whole grains and legumes from your diet by adopting The Paleo Diet.
As always, the devil is in the details when it comes to getting correct answers to nutritional questions. There are scientific papers showing no difference between blood iron concentrations in vegetarians and meat eaters. But what matters is how iron measurements were performed in these experiments; this information is absolutely essential to know whether iron deficiencies exist or not. Studies that examine blood levels of iron in vegetarians using either hemoglobin (an iron-carrying substance in red blood cells) or hematocrit (the concentration of red blood cells) rely on unreliable indicators of long-term iron status. A much better marker is an iron-carrying molecule called ferritin.75 Virtually all epidemiological (population) studies of vegans or ovo/lacto vegetarians show them to be either deficient or borderline iron deficient when blood ferritin levels are measured.
When women were placed on lacto/ovo vegetarian diets, their intestinal iron absorption was reduced by 70%. Inexplicably, blood ferritin levels did not decline for the group as a whole, but it should be noted that nearly half of the subjects did experience drops in blood ferritin concentrations. You will recall from earlier in this essay that vegetarian diets caused 7 out of 9 women to stop ovulating. With the cessation of menstrual periods, monthly blood losses also cease, which in turn prevents monthly iron losses because blood is a rich source of iron. In any study evaluating blood iron stores in women, it is absolutely essential to know whether normal menstrual cycles were altered. Unfortunately, the study’s lead author, Dr. Hunt, did not provide us with this information, thereby making the correct interpretation of her experiment difficult or impossible.63
Iodine Deficiencies in Vegetarian Diets
A number of studies have reported that vegetarian and vegan diets increase the risk for iodine deficiency.40, 77, 102, 153 One study from Europe demonstrated that 80% of vegans and 25% of ovo/lacto vegetarians suffered from iodine deficiency.77
Additionally, a dietary intervention by Dr. Remer and colleagues in 1999 confirmed this epidemiological evidence.102 After only five days on ovo/lacto vegetarian diets, iodine status and function became impaired in healthy adults.102 The primary reason vegetarian diets cause iodine deficiencies is that plant foods (except for seaweed) are generally poor sources of iodine compared to meat, eggs, poultry and fish. Gross deficiencies of iodine cause our thyroid glands to swell, producing a condition known as goiter. Worse, iodine deficiency in pregnant women can result in severe birth defects called cretinism.141
Because salt is fortified with iodine, most people in the U.S. and Europe rarely develop gross iodine deficiencies.40, 140, 141 However, moderate to mild iodine deficiencies do appear in westernized countries, particularly among vegetarians and vegans.77, 102 Moderate iodine deficiency impairs normal growth in children and adversely affects mental development.140, 141, 152 A large meta-analysis revealed that moderate childhood iodine deficiency lowered I.Q. by 12-13.5 points.153 Paleo Diets are not just good medicine for adults; they also ensure normal physical and mental development in our children because of their high iodine content.
One of the problems with plant-based diets is that they may set in motion a vicious cycle that makes iodine deficiencies worse. When the thyroid gland’s iodine stores become depleted, certain antinutrients found in plant foods can gain a foothold and further aggravate iodine shortages.
Soy beans and soy products are frequently a mainstay in vegetarian diets. They can promote inflammation66, and unfortunately soy contains certain antinutrients (isoflavones) that impair iodine metabolism in the thyroid gland.43, 95 However, this only happens when our body stores of iodine are already depleted.
So, plant-based diets start by putting us at risk for developing iodine deficiencies, and when this happens our bodies become vulnerable to plant antinutrients that worsen the pre-existing deficiency. The important point here is that antinutritional compounds have virtually zero effect upon our thyroid gland when our body stores of iodine are normal and fully replete. Because meats, fish, eggs and poultry are rich sources of iodine, you will never have to worry about this nutrient when you eat Paleo style.
Vitamin D and Vitamin B6 Deficiencies in Vegetarian Diets
In my paper, Cereal Grains: Humanity’s Double-Edged Sword, I have pointed out how excessive consumption of whole grains adversely affects vitamin D status in our bodies.148 Hence vitamin D deficiencies run rampant in vegetarians worldwide because it is nearly impossible to become a full-fledged vegetarian without eating lots of grains.
In the largest study of vegetarians ever undertaken (the EPIC-Oxford study), Dr. Crowe and fellow researchers reported that blood concentrations of vitamin D were highest in meat eaters and lowest in vegans and vegetarians.29 Nearly 8% of the vegans maintained clinical deficiencies of vitamin D.
Vitamin D is not really a vitamin at all, but rather a crucial hormone that impacts virtually every cell in our bodies.
Vegan or vegetarian diets also frequently cause vitamin B6 deficiencies. On paper, it would appear that vegetarian diets generally meet daily recommended intakes for vitamin B6. This assumption comes primarily from population surveys examining the foods that vegans and vegetarians normally eat. In contrast, when blood samples are analyzed from people relying upon plant-based diets, they reveal that long-term vegetarians and vegans are frequently deficient in vitamin B6.
A recent study of 93 German vegans by Dr. Waldman and colleagues showed that 58% of these men and women suffered from vitamin B6 deficiencies despite seemingly adequate intakes of this essential nutrient.131 It turns out that the type of vitamin B6 (pyridoxine glucoside) found in plant foods is poorly absorbed.47, 103 The presence of pyridoxine glucoside in plant foods, along with fiber, has been reported to reduce the bioavailability of vitamin B6 so that only 20 to 25% is absorbed and completely utilized.47
In contrast, vitamin B6 found in animal foods is easily assimilated, and an estimated 75 to 100% fully makes its way into our bloodstreams.47
Dr. Leklem’s laboratory at Oregon State University provided compelling evidence that vegetarian diets relying upon the plant form of vitamin B6 adversely affect the body’s overall vitamin B6 stores.47 Nine women were put on diets either high or low in the plant form of vitamin B6 (pyridoxine glucoside). After only 18 days, the high pyridoxine glucoside diets (the plant form) consistently lowered blood concentrations and other indices of vitamin B6 status.
Deficiencies in B6 elevate blood homocysteine concentrations and increase our risk for cardiovascular disease, similar to shortages of folate and vitamin B12. Further, vitamin B6 is an important factor in normal immune system functioning,149 and shortfalls of this crucial nutrient have been implicated in depression150 and colorectal cancer.151
Omega-3 Fatty Acid Deficiencies in Vegetarian Diets
A few years ago I was involved in a series of experiments here at Colorado State University in which we were interested in determining how high- and low-salt diets affected exercise-induced asthma. Our working hypothesis was that high-salt diets would make measures of lung function worse, and low-salt diets would improve things. One of our concerns with this experiment was to somehow make sure our subjects had fully complied with either the high- or low-salt diets. Completely removing salt from your diet isn’t easy to do, and if some of our subjects had decided to sneak in a piece of pizza or some Doritos, it would mess up the experiment’s outcome.
Fortunately, there was an easy way to figure out if our subjects had been compliant with the prescribed diets. All we had to do was spot check their urine, because measurement of urinary salt levels is an accurate gauge of dietary salt consumption. High urinary salt levels universally reflect high salt consumption, whereas low urinary salt concentrations indicate low salt consumption. Short of major disease, high amounts of salt in the urine virtually always indicate high amounts of salt in the diet.
In a similar manner, there are equivalent telltale indicators of omega-3 fatty acids in our bloodstreams that tell us beyond a shadow of a doubt whether or not we have regularly consumed fish, seafood or other good sources of these healthful fats.
The three main types of omega-3 fatty acids we need to concern ourselves with are EPA, DHA and ALA. EPA and DHA are called long-chain omega-3 fatty acids and are only found in high amounts in fish, seafood, certain meats, and other foods of animal origin. Plant foods contain no EPA or DHA. They contain ALA, a shorter-chain fatty acid that can also be found in animal foods.
Both EPA and DHA in our red blood cells are markers of these important long-chain omega-3 fatty acids in our diet. Without good dietary sources of EPA and DHA, our blood levels of EPA and DHA will decline. It is virtually impossible to achieve high blood levels of EPA and DHA without regularly consuming fish, seafood and certain meats and organ meats (particularly grass-produced meats and organ meats).
One of the major nutritional shortcomings in vegans is that they obtain absolutely no EPA or DHA from their diets.108, 110, 111 Consequently, they are totally dependent upon plant-based ALA, supplements, or fortified foods to obtain these healthful long-chain omega-3 fatty acids. So without supplements or fortified foods, all vegans will become deficient in EPA and DHA because plant-based ALA is inefficiently converted into these long-chain fatty acids in our bodies. The liver converts less than 5% of ALA into EPA and less than 1% of ALA into DHA.15, 97
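Those conversion ceilings invite a quick back-of-the-envelope check. The short Python sketch below is purely illustrative: the 2,000 mg daily ALA intake and the roughly 250 mg/day combined EPA+DHA reference intake are assumptions for the sake of the example, not figures from this article; only the less-than-5% and less-than-1% conversion rates come from the text above.

```python
# Back-of-the-envelope estimate (not medical advice): the most EPA/DHA a
# person could plausibly synthesize from plant ALA, using the conversion
# ceilings cited above (<5% of ALA to EPA, <1% of ALA to DHA).

ALA_MG_PER_DAY = 2000          # hypothetical daily ALA intake (assumption)
EPA_CONVERSION_CEILING = 0.05  # "less than 5%" of ALA becomes EPA
DHA_CONVERSION_CEILING = 0.01  # "less than 1%" of ALA becomes DHA
REFERENCE_TARGET_MG = 250      # illustrative combined EPA+DHA reference intake

# Upper bounds, since the cited rates are ceilings, not typical values
epa_mg = ALA_MG_PER_DAY * EPA_CONVERSION_CEILING
dha_mg = ALA_MG_PER_DAY * DHA_CONVERSION_CEILING

print(f"Upper-bound EPA:  {epa_mg:.0f} mg/day")
print(f"Upper-bound DHA:  {dha_mg:.0f} mg/day")
print(f"Combined:         {epa_mg + dha_mg:.0f} mg/day "
      f"(vs. ~{REFERENCE_TARGET_MG} mg reference)")
```

Even at these generous ceilings, the combined upper bound (about 120 mg/day) falls well short of the assumed reference intake, which is the article's point about relying on ALA alone.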
Virtually every epidemiological study that has ever been published shows that vegans who do not supplement or consume long-chain omega-3 fortified foods are deficient in both EPA and DHA.76, 88, 108, 110, 111 Lacto/ovo vegetarians don’t fare much better because milk and egg based vegetarian diets simply do not supply sufficient DHA or EPA to maintain normal blood concentrations.88, 111
Perhaps the single most important dietary recommendation to improve your health and prevent illness is to increase your dietary intake of EPA and DHA. Deficiencies in DHA and EPA represent a potent risk factor for many chronic diseases. Thousands of scientific papers covering an assortment of diseases clearly show the health benefits of these fatty acids. In randomized clinical trials in patients with pre-existing heart disease, omega-3 fatty acid supplements significantly reduced cardiovascular events (deaths, non-fatal heart attacks, and non-fatal strokes).19, 48, 138
Omega-3 fatty acids lessen the risk for heart disease through a number of means, including a reduction in heart beat irregularities called arrhythmias, a decrease in blood clots, and reduced inflammation, which is now known to be a chief factor causing atherosclerosis, or artery clogging.
Taurine deficiencies in Vegetarian Diets
Taurine is an amino acid (actually a sulfonic acid, because it lacks a carboxyl group) in our bloodstreams that has multiple functions in every cell of our bodies. Unfortunately, this nutrient is not present in any plant food and is found in low concentrations in milk (6 mg per cup).80 In contrast, all flesh foods are excellent sources of taurine.80 For example, a ¼ pound of dark meat from chicken provides 200 mg of taurine. Shellfish are richer still, with over 800 mg per quarter pound.
The daily taurine intake in non-vegetarians is about 150 mg, whereas lacto/ovo vegetarians take in about 17 mg per day, and vegans get none. Although our livers can manufacture taurine from precursor molecules, our capacity to do so is limited – so much so that infant formulas are routinely fortified with this amino acid.
As you might expect, studies of vegans show that their blood taurine levels are lower than those of meat eaters.81, 100 How depleted blood concentrations of taurine affect our overall health is not entirely understood. Nevertheless, shortages of this amino acid and of the omega-3 fatty acids EPA and DHA may cause certain elements (platelets) in our blood to clot more rapidly, which in turn increases our risk for cardiovascular disease.85, 91 Despite their meat-free diets, vegetarians almost always exhibit abnormal platelets that excessively adhere to one another.
In one dietary intervention, Dr. Mezzano and colleagues demonstrated that after eight weeks of EPA and DHA supplementation normal platelet function was restored in a group of 18 lacto/ovo vegetarians.8 Compromised taurine status will never be a problem when you follow The Paleo Diet, because meat, fish, poultry, and animal products are consumed often.
If you have adopted, or are considering adopting, a plant-based diet in hopes of improving your health, make sure you reread this series of articles and look up all of the references I have provided. The evidence that vegetarian and vegan diets can cause a multitude of nutritional deficiencies is overwhelming and conclusive. Over the course of a lifetime, vegetarian diets will not reduce your risk of chronic disease and will not allow you to live longer. Rather, this abnormal way of eating will predispose you to a host of health problems and illnesses. Vegetarianism is an unnatural way of eating that has no evolutionary precedence in our species. No hunter-gatherer society ever consumed a meatless diet.
The Paleo Diet has been criticized and labeled a fad diet because it eliminates “two entire food groups” (grains and dairy). Yet vegan diets also eliminate entire food groups (dairy as well as meat and fish) and often escape the same criticism. If The Paleo Diet is a fad diet, then it is the world’s oldest.
90. McCann JC, Ames BN. An overview of evidence for a causal relation between iron deficiency during development and deficits in cognitive or behavioral function. Am J Clin Nutr. 2007 Apr;85(4):931-45.
| no |
Virology | Can the human immunodeficiency virus (HIV) be cured? | yes_statement | "hiv" can be "cured".. there is a "cure" for "hiv". | https://abcnews.go.com/Health/5th-person-confirmed-cured-hiv/story?id=97323361 | 5th person confirmed to be cured of HIV - ABC News | Researchers are announcing that a 53-year-old man in Germany has been cured of HIV.
Referred to as "the Dusseldorf patient" to protect his privacy, researchers said he is the fifth confirmed case of an HIV cure. Although the details of his successful treatment were first announced at a conference in 2019, researchers could not confirm he had been officially cured at that time.
Today, researchers announced that the Dusseldorf patient still has no detectable virus in his body, even after stopping his HIV medication four years ago.
"It’s really cure, and not just, you know, long term remission," said Dr. Bjorn-Erik Ole Jensen, who presented details of the case in a new publication in "Nature Medicine."
"This obviously positive symbol makes hope, but there's a lot of work to do," Jensen said.
For most people, HIV is a lifelong infection, and the virus is never fully eradicated. Thanks to modern medication, people with HIV can live long and healthy lives.
The Dusseldorf patient joins a small group of people who have been cured under extreme circumstances after a stem cell transplant, typically only performed in cancer patients who don’t have any other options. A stem cell transplant is a high-risk procedure that effectively replaces a person's immune system. The primary goal is to cure someone's cancer, but the procedure has also led to an HIV cure in a handful of cases.
HIV, or human immunodeficiency virus, enters and destroys the cells of the immune system. Without treatment, the continued damage can lead to AIDS, or acquired immunodeficiency syndrome, where a person cannot fight even a small infection.
With about 38.4 million people globally living with HIV, treatments have come a long way. Modern medication can keep the virus at bay, and studies looking into preventing HIV infection with a vaccine are also underway.
The first person cured of HIV was Timothy Ray Brown. Researchers published his case as the Berlin patient in 2009. That was followed by the London patient, published in 2019. Most recently, the City of Hope and New York patients were published in 2022.
“I think we can get a lot of insights from this patient and from these similar cases of HIV cure," Jensen said. "These insights give us some hints where we could go to make the strategy safer."
All four of these patients had undergone stem cell transplants for their blood cancer treatment. Their donors also had the same HIV-resistant mutation that deletes a protein called CCR5, which HIV normally uses to enter the cell. Only 1% of the total population carries this genetic mutation that makes them resistant to HIV.
“When you hear about these HIV cure, it’s obviously, you know, incredible, given how challenging it’s been. But, it still remains the exception to the rule," said Dr. Todd Ellerin, director of infectious disease at South Shore Health.
Stem cell transplantation is a complicated procedure that comes with many risks, and it is too risky to offer as a cure for everyone with HIV.
However, scientists are hopeful. Each time they cure a new patient, they gain valuable research insights that help them understand what it would take to find a cure for everyone.
“It is obviously a step forward in advancing the science and having us sort of understanding, in some ways, what it takes to cure HIV," Ellerin said.
Kaviya Sathyakumar, M.D., M.B.A., is a family medicine resident physician at Ocala Regional Medical Center in Florida and a member of the ABC News Medical Unit.
Virology | Can the human immunodeficiency virus (HIV) be cured? | yes_statement | "hiv" can be "cured".. there is a "cure" for "hiv". | https://www.nbcnews.com/nbc-out/out-health-and-wellness/scientists-possibly-cured-hiv-woman-first-time-rcna16196 | Scientists have possibly cured HIV in a woman for the first time | An American research team reported that it has possibly cured HIV in a woman for the first time. Building on past successes, as well as failures, in the HIV-cure research field, these scientists used a cutting-edge stem cell transplant method that they expect will expand the pool of people who could receive similar treatment to several dozen annually.
Their patient stepped into a rarified club that includes three men whom scientists have cured, or very likely cured, of HIV. Researchers also know of two women whose own immune systems have, quite extraordinarily, apparently vanquished the virus.
Carl Dieffenbach, director of the Division of AIDS at the National Institute of Allergy and Infectious Diseases, one of multiple divisions of the National Institutes of Health that funds the research network behind the new case study, told NBC News that the accumulation of repeated apparent triumphs in curing HIV “continues to provide hope.”
“It’s important that there continues to be success along this line,” he said.
In the first case of what was ultimately deemed a successful HIV cure, investigators treated the American Timothy Ray Brown for acute myeloid leukemia, or AML. He received a stem cell transplant from a donor who had a rare genetic abnormality that grants the immune cells that HIV targets natural resistance to the virus. The strategy in Brown’s case, which was first made public in 2008, has since apparently cured HIV in two other people. But it has also failed in a string of others.
This therapeutic process is meant to replace an individual’s immune system with another person’s, treating their cancer while also curing their HIV. First, physicians must destroy the original immune system with chemotherapy and sometimes irradiation. The hope is that this also destroys as many immune cells as possible that still quietly harbor HIV despite effective antiretroviral treatment. Then, provided the transplanted HIV-resistant stem cells engraft properly, new viral copies that might emerge from any remaining infected cells will be unable to infect any other immune cells.
It is unethical, experts stress, to attempt an HIV cure through a stem cell transplant — a toxic, sometimes fatal procedure — in anyone who does not have a potentially fatal cancer or other condition that already makes them a candidate for such risky treatment.
Dr. Deborah Persaud, a pediatric infectious disease specialist at the Johns Hopkins University School of Medicine who chairs the NIH-funded scientific committee behind the new case study (the International Maternal Pediatric Adolescent AIDS Clinical Trials Network), said that “while we’re very excited” about the new case of possible HIV cure, the stem cell treatment method is “still not a feasible strategy for all but a handful of the millions of people living with HIV.”
The “New York patient,” as the woman is being called, because she received her treatment at New York-Presbyterian Weill Cornell Medical Center in New York City, was diagnosed with HIV in 2013 and leukemia in 2017.
Dr. Yvonne Bryson, a pediatric infectious disease specialist at UCLA, and Persaud have partnered with a network of other researchers to conduct lab tests to evaluate the woman. At Weill Cornell, Dr. Jingmei Hsu and Dr. Koen van Besien from the stem cell transplant program paired with infectious disease specialist Dr. Marshall Glesby on patient care.
This team has long sought to mitigate the considerable challenge investigators face in finding a donor whose stem cells could both treat a patient’s cancer and cure their HIV.
Traditionally, such a donor must have a close enough human leukocyte antigen, or HLA, match to maximize the likelihood that the stem cell transplant will engraft well. The donor must also have the rare genetic abnormality conferring HIV resistance.
This genetic abnormality largely occurs in people with northern European ancestry, and even among people native to that area, at a rate of only about 1 percent. So for those lacking substantial similar ancestry, the chance of finding a suitable stem cell donor is particularly low.
In the United States, African Americans comprise about 40 percent and Hispanics about 25 percent of the approximately 1.2 million people with HIV; whites comprise some 28 percent.
Cutting-edge treatment
The procedure used to treat the New York patient, known as a haplo-cord transplant, was developed by the Weill Cornell team to expand cancer treatment options for people with blood malignancies who lack HLA-identical donors. First, the cancer patient receives a transplant of umbilical cord blood, which contains stem cells that amount to a powerful nascent immune system. A day later, they receive a larger graft of adult stem cells. The adult stem cells flourish rapidly, but over time they are entirely replaced by cord blood cells.
Compared with adult stem cells, cord blood is more adaptable, generally requires less of a close HLA match to succeed in treating cancer and causes fewer complications. Cord blood, however, does not typically yield enough cells to be effective as a cancer treatment in adults, so transplants of such blood have traditionally been largely limited to pediatric oncology. In haplo-cord transplants, the additional transplantation of stem cells from an adult donor, which provides a plethora of cells, can help compensate for the paucity of cord blood cells.
“The role of the adult donor cells is to hasten the early engraftment process and render the transplant easier and safer,” van Besien said.
For the New York patient, who is of mixed-race ancestry, the Weill Cornell team and its collaborators found the HIV-resistant genetic abnormality in the umbilical cord blood of an infant donor. They paired a transplant of those cells with stem cells from an adult donor. Both donors were only a partial HLA match to the woman, but the combination of the two transplants allowed for this.
“We estimate that there are approximately 50 patients per year in the U.S. who could benefit from this procedure,” van Besien said of the haplo-cord transplant’s use as an HIV-cure therapy. “The ability to use partially matched umbilical cord blood grafts greatly increases the likelihood of finding suitable donors for such patients.”
Another benefit of relying on cord blood is that banks of this resource are much easier to screen in large numbers for the HIV-resistance abnormality than the bone marrow registries from which oncologists find stem cell donors. Before the New York patient became a candidate for the haplo-cord treatment, Bryson and her collaborators had already screened thousands of cord blood samples in search of the genetic abnormality.
The woman’s transplant engrafted very well. She has been in remission from her leukemia for more than four years. Three years after her transplant, she and her clinicians discontinued her HIV treatment. Fourteen months later, she has experienced no resurgent virus.
Multiple ultrasensitive tests can detect no sign in the woman’s immune cells of any HIV capable of replicating, nor can the researchers detect any HIV antibodies or immune cells programmed to go after the virus. They also drew immune cells from the woman and in a laboratory experiment attempted to infect them with HIV — to no avail.
“It would’ve been very difficult to find a match plus this rare mutation unless we were able to use cord blood cells,” Dr. Bryson said at Tuesday’s conference. “It does open up this approach for a greater diversity of population.”
Remaining cautious
At this stage, Bryson and her colleagues consider the woman in a state of HIV remission.
“You don’t want to over-call it,” Bryson said of favoring the word “remission” over “cure” at this stage.
Case in point: Johns Hopkins’ Deborah Persaud was the author of a case study she first presented in 2013 of a child in Mississippi who was in a state of what at the time she called a “functional cure.” After apparently contracting HIV from her mother in utero, the baby was treated with an atypically intensified antiretroviral regimen shortly after birth. When Persaud announced the case study, the toddler had been off of HIV treatment for 10 months with no viral rebound. News of this supposed HIV cure swept the globe and ignited a media frenzy. But the child’s virus wound up rebounding 27 months after her treatment interruption.
If enough time passes without any signs of active virus — a few years — the authors of this latest case study would consider the New York patient cured.
“I’m excited that it’s turned out so well for her,” Bryson said. The apparent success of the case, she said, has given researchers “more hope and more options for the future.”
Why is HIV so difficult to cure?
When the highly effective combination antiretroviral treatment for HIV arrived in 1996, Dr. David Ho, who was one of the architects of this therapeutic revolution and is the director of the Aaron Diamond AIDS Research Center in New York City, famously theorized that given enough time, such medications could eventually eradicate the virus from the body.
To date, there are a handful of cases of people who were started on antiretrovirals very soon after contracting HIV, later went off treatment and have remained in viral remission with no rebounding virus for years.
Otherwise, Ho’s prediction has proved false. During the past quarter century, HIV-cure researchers have learned in increasingly exacting detail what a daunting task it is not only to cure HIV, but to develop effective curative therapies that are safe and scalable.
HIV maintains such a permanent presence in the body because shortly after infection, the virus splices its genetic code into long-lived immune cells that will enter a resting state — meaning they stop churning out new viral copies. Antiretrovirals only work on replicating cells, so HIV can remain under the radar of such medications in resting cells for extended periods, sometimes years. Absent any HIV treatment, such cells may restart their engines at any time and repopulate the body with massive amounts of virus.
Timothy Brown’s case, published in 2009, ignited the HIV-cure research field, which has seen soaring financial investment since.
More than three years have passed since the two other men, known as the London and Düsseldorf patients, have been off of HIV treatment with no viral rebound. Consequently, the authors of their case studies — University of Cambridge’s Ravindra K. Gupta and Dr. Björn Jensen of Düsseldorf University Hospital — each recently told NBC News their respective patient was “almost definitely” cured of the virus.
Since 2020, scientists have also announced the cases of two women whose own immune systems have apparently cured them of HIV. They are among the approximately 1 in 200 people with HIV known as “elite controllers,” whose immune systems can greatly suppress viral replication without medication. In their cases, their bodies went even further and apparently destroyed all functional virus.
A less toxic treatment
Another major upside of the haplo-cord transplant the New York patient received, compared to the treatment of her three male predecessors, is that the use of cord blood — for not entirely understood reasons — greatly reduces the risk of what’s known as graft vs. host disease. This is a potentially devastating inflammatory reaction in which the donor cells go to war with the recipient’s body. The men in the three other HIV-cure cases all experienced this, which in Brown’s case caused prolonged health problems.
Benjamin Ryan is an independent journalist specializing in science and LGBTQ coverage. He contributes to NBC News, The New York Times, The Guardian and Thomson Reuters Foundation and has also written for The Washington Post, The Nation, The Atlantic and New York.
Why is HIV so difficult to cure?
When the highly effective combination antiretroviral treatment for HIV arrived in 1996, Dr. David Ho, who was one of the architects of this therapeutic revolution and is the director of the Aaron Diamond AIDS Research Center in New York City, famously theorized that given enough time, such medications could eventually eradicate the virus from the body.
To date, there are a handful of cases of people who were started on antiretrovirals very soon after contracting HIV, later went off treatment and have remained in viral remission with no rebounding virus for years.
Otherwise, Ho’s prediction has proved false. During the past quarter century, HIV-cure researchers have learned in increasingly exacting detail what a daunting task it is not only to cure HIV, but to develop effective curative therapies that are safe and scalable.
HIV maintains such a permanent presence in the body because shortly after infection, the virus splices its genetic code into long-lived immune cells that will enter a resting state — meaning they stop churning out new viral copies. Antiretrovirals only work on replicating cells, so HIV can remain under the radar of such medications in resting cells for extended periods, sometimes years. Absent any HIV treatment, such cells may restart their engines at any time and repopulate the body with massive amounts of virus.
Timothy Brown’s case, published in 2009, ignited the HIV-cure research field, which has seen soaring financial investment since.
More than three years have passed since the two other men, known as the London and Düsseldorf patients, have been off of HIV treatment with no viral rebound. Consequently, the authors of their case studies — University of Cambridge’s Ravindra K. Gupta and Dr. Björn Jensen of Düsseldorf University Hospital — each recently told NBC News their respective patient was “almost definitely” cured of the virus.
| yes |
Virology | Can the human immunodeficiency virus (HIV) be cured? | yes_statement | "hiv" can be "cured".. there is a "cure" for "hiv". | https://hivinfo.nih.gov/understanding-hiv/fact-sheets/hiv-and-sexually-transmitted-diseases-stds | HIV and Sexually Transmitted Diseases (STDs) | NIH | HIV and Sexually Transmitted Diseases (STDs)
Key Points
Sexually transmitted diseases (STDs), also called sexually transmitted infections (STIs), are infections that spread from person to person through sexual activity, including anal, vaginal, or oral sex.
Many health care providers use the term “infection” instead of “disease”, because a person with an infection may have no symptoms but still require treatment. When untreated, an STI can become a disease.
Having an STD can make it easier to get HIV. For example, an STD can cause a sore or a break in the skin, which can make it easier for HIV to enter the body. Having HIV and another STD may increase the risk of HIV transmission.
To prevent STDs, including HIV, choose less risky sexual behaviors and use condoms correctly every time you have sex.
What is an STD?
STD stands for sexually transmitted disease, also called sexually transmitted infections (STIs). STDs are infections that spread from person to person through sexual activity, including anal, vaginal, or oral sex. STDs are caused by bacteria, parasites, and viruses.
Many health care providers use the term “infection” instead of “disease”, because a person with an infection may have no symptoms but still require treatment. When untreated, an STI can become a disease.
Having sex while using drugs or alcohol can also raise the risk: drugs and alcohol can affect a person's judgment, which can lead to risky behaviors.
Having an STD can make it easier to get HIV. For example, an STD can cause a sore or a break in the skin, which can make it easier for HIV to enter the body. Having HIV and another STD may increase the risk of HIV transmission.
How can a person reduce the risk of getting an STD?
Sexual abstinence (never having vaginal, anal, or oral sex) is the only way to eliminate any chance of getting an STD. But if you are sexually active, you can lower your risk for STDs, including HIV, by choosing less risky sexual behaviors and using condoms correctly every time you have sex.
How can a person with HIV prevent passing HIV to others?
Take HIV medicines daily. Treatment with HIV medicines (called antiretroviral therapy or ART) helps people with HIV live longer, healthier lives. One of the goals of ART is to reduce a person's viral load to an undetectable level. An undetectable viral load means that the level of HIV in the blood is too low to be detected by a viral load test. People with HIV who maintain an undetectable viral load have effectively no risk of transmitting HIV to their HIV-negative partner through sex.
If your viral load is not undetectable—or does not stay undetectable—you can still protect your partner from HIV by using condoms and choosing less risky sexual behaviors. Your partner can take medicine to prevent getting HIV, which is called pre-exposure prophylaxis (PrEP). PrEP is an HIV prevention option for people who do not have HIV but who are at risk of getting HIV. PrEP involves taking a specific HIV medicine every day to reduce the risk of getting HIV through sex or injection drug use.
What are the symptoms of STDs?
Symptoms of STDs may be different depending on the STD, and not everyone will experience the same STD symptoms. Examples of possible STD symptoms include painful urination (peeing), unusual discharge from the vagina or penis, and fever.
STDs may not always cause symptoms. Even if a person has no symptoms from an STD, it is still possible to pass the STD on to other people.
Talk to your health care provider about getting tested for STDs and ask your sex partner to do the same.
What is the treatment for STDs?
STDs caused by bacteria or parasites can be cured with medicine. There is no cure for STDs caused by viruses, but treatment can relieve or eliminate symptoms and help keep the STD under control. Treatment also reduces the risk of passing on the STD to a partner. For example, although there is no cure for HIV, HIV medicines can prevent HIV from advancing to AIDS and reduce the risk of HIV transmission.
Untreated STDs may lead to serious complications. For example, untreated gonorrhea in women can cause pelvic inflammatory disease, which may lead to infertility. Without treatment, HIV can gradually destroy the immune system and advance to AIDS. | How can a person with HIV prevent passing HIV to others?
Take HIV medicines daily. Treatment with HIV medicines (called antiretroviral therapy or ART) helps people with HIV live longer, healthier lives. One of the goals of ART is to reduce a person's viral load to an undetectable level. An undetectable viral load means that the level of HIV in the blood is too low to be detected by a viral load test. People with HIV who maintain an undetectable viral load have effectively no risk of transmitting HIV to their HIV-negative partner through sex.
If your viral load is not undetectable—or does not stay undetectable—you can still protect your partner from HIV by using condoms and choosing less risky sexual behaviors. Your partner can take medicine to prevent getting HIV, which is called pre-exposure prophylaxis (PrEP). PrEP is an HIV prevention option for people who do not have HIV but who are at risk of getting HIV. PrEP involves taking a specific HIV medicine every day to reduce the risk of getting HIV through sex or injection drug use.
What are the symptoms of STDs?
Symptoms of STDs may be different depending on the STD, and not everyone will experience the same STD symptoms. Examples of possible STD symptoms include painful urination (peeing), unusual discharge from the vagina or penis, and fever.
STDs may not always cause symptoms. Even if a person has no symptoms from an STD, it is still possible to pass the STD on to other people.
Talk to your health care provider about getting tested for STDs and ask your sex partner to do the same.
What is the treatment for STDs?
STDs caused by bacteria or parasites can be cured with medicine. There is no cure for STDs caused by viruses, but treatment can relieve or eliminate symptoms and help keep the STD under control. Treatment also reduces the risk of passing on the STD to a partner. | no |
Virology | Can the human immunodeficiency virus (HIV) be cured? | yes_statement | "hiv" can be "cured".. there is a "cure" for "hiv". | https://www.nbcnews.com/health/health-news/5th-person-likely-cured-hiv-another-long-term-remission-rcna40116 | A 5th person is likely cured of HIV, and another is in long-term ... | Two new cases presented Wednesday at the International AIDS Conference in Montreal have advanced the field of HIV cure science, demonstrating yet again that ridding the body of all copies of viable virus is indeed possible, and that prompting lasting viral remission also might be attainable.
In one case, scientists reported that a 66-year-old American man with HIV has possibly been cured of the virus through a stem cell transplant to treat blood cancer. The approach — which has demonstrated success or apparent success in four other cases — uses stem cells from a donor with a specific rare genetic abnormality that gives rise to immune cells naturally resistant to the virus.
In another case, Spanish researchers determined that a woman who received an immune-boosting regimen in 2006 is in a state of what they characterize as viral remission, meaning she still harbors viable HIV but her immune system has controlled the virus’s replication for over 15 years.
Experts stress, however, that it is not ethical to attempt to cure HIV through a stem cell transplant — a highly toxic and potentially fatal treatment — in anyone who is not already facing a potentially fatal blood cancer or other health condition that would make them a candidate for such a treatment.
“While a transplant is not an option for most people with HIV, these cases are still interesting, still inspiring and illuminate the search for a cure,” Dr. Sharon Lewin, an infectious disease specialist at the Peter Doherty Institute for Infection and Immunity at the University of Melbourne, told reporters on a call last week ahead of the conference.
There are also no guarantees of success through the stem cell transplant method. Researchers have failed to cure HIV using this approach in a slew of other people with the virus.
Nor is it clear that the immune-enhancing approach used in the Spanish patient will work in additional people with HIV. The scientists involved in that case told NBC News that much more research is needed to understand why the therapy appears to have worked so well in the woman — it failed in all participants in the clinical trial but her — and how to identify others in whom it might have a similar impact. They are trying to determine, for example, if specific facets of her genetics might favor a viral remission from the treatment and whether they could identify such a genetic profile in other people.
The ultimate goal of the HIV cure research field is to develop safe, effective, tolerable and, importantly, scalable therapies that could be made available to wide swaths of the global HIV population of some 38 million people. Experts in the field tend to think in terms of decades rather than years when hoping to achieve such a goal against a foe as complex as this virus.
The new cure case
Diagnosed with HIV in 1988, the man who received the stem cell transplant is both the oldest person to date — 63 years old at the time of the treatment — and the one living with HIV for the longest to achieve an apparent success from a stem cell transplant cure treatment.
The white male — dubbed the “City of Hope patient” after the Los Angeles cancer center where he received his transplant 3½ years ago — has been off of antiretroviral treatment for HIV for 17 months.
“We monitored him very closely, and to date we cannot find any evidence of HIV replicating in his system,” said Dr. Jana Dickter, an associate clinical professor in the Division of Infectious Diseases at City of Hope. Dickter is on the patient’s treatment team and presented his case at this week’s conference.
This means the man has experienced no viral rebound. And even through ultra-sensitive tests, including biopsies of the man’s intestines, researchers couldn’t find any signs of viable virus.
The man was at one time diagnosed with AIDS, meaning his immune system was critically suppressed. After taking some of the early antiretroviral therapies, such as AZT, that were once prescribed as individual agents and failed to treat HIV effectively, the man started a highly effective combination antiretroviral treatment in the 1990s.
He was treated with chemotherapy to send his leukemia into remission prior to his transplant. Because of his older age, he received a reduced intensity chemotherapy to prepare him for his stem cell transplant — a modified therapy that older people with blood cancers are better able to tolerate and that reduces the potential for transplant-related complications.
Next, the man received the stem cell transplant from the donor with an HIV-resistant genetic abnormality. This abnormality is seen largely among people with northern European ancestry, occurring at a rate of about 1% among those native to the region.
According to Dr. Joseph Alvarnas, a City of Hope hematologist and a co-author of the report, the new immune system from the donor gradually overtook the old one — a typical phenomenon.
Some two years after the stem cell transplant, the man and his physicians decided to interrupt his antiretroviral treatment. He has remained apparently viable-virus free ever since. Nevertheless, the study authors intend to monitor him for longer and to conduct further tests before they are ready to declare that he is definitely cured.
The viral remission case
A second report presented at the Montreal conference detailed the case of a 59-year-old woman in Spain who is considered to be in a state of viral remission.
The woman was enrolled in a clinical trial in Barcelona in 2006 of people receiving standard antiretroviral treatment. She was randomized to also receive 11 months of four therapies meant to prime the immune system to better fight the virus, according to Núria Climent, a biologist at the University of Barcelona Hospital Clinic, who presented the findings.
Then Climent and the research team decided to take the woman off her antiretrovirals, per the study’s planned protocol. She has now maintained a fully suppressed viral load for over 15 years. Unlike the handful of people either cured or possibly cured by stem cell transplants, however, she still harbors virus that is capable of producing viable new copies of itself.
Her body has actually controlled the virus more efficiently with the passing years, according to Dr. Juan Ambrosioni, an HIV physician in the Barcelona clinic.
Ambrosioni, Climent and their collaborators said they waited so long to present this woman’s case because it wasn’t until more recently that technological advances have allowed them to peer deeply into her immune system and determine how it is controlling HIV on its own.
“It’s great to have such a gaze,” Ambrosioni said, noting that “the point is to understand what is going on and to see if this can be replicated in other people.”
In particular, it appears that what are known as her memory-like NK cells and CD8 gamma-delta T cells are leading this effective immunological army.
The research team noted that they do not believe that the woman would have controlled HIV on her own without the immune-boosting treatment, because the mechanisms by which her immune cells appear to control HIV are different from those seen in “elite controllers,” the approximately 1 in 200 people with HIV whose immune systems can greatly suppress the virus without treatment.
Lewin, of Australia’s Peter Doherty Institute, told reporters last week that it is still difficult to judge whether the immune-boosting treatment the woman received actually caused her state of remission. Much more research is needed to answer that question and to determine if others might also benefit from the therapy she received, she said.
Four decades of HIV, a handful of cures
Over four decades, just five people have been cured or possibly cured of HIV.
The virus remains so vexingly difficult to cure because shortly after entering the body it infects types of long-lived immune cells that enter a resting, or latent, state. Because antiretroviral treatment only attacks HIV when infected cells are actively churning out new viral copies, these resting cells, which are known collectively as the viral reservoir and can stay latent for years, remain under the radar of standard treatment. These cells can return to an active state at any time. So if antiretrovirals are interrupted, they can quickly repopulate the body with virus.
The first person cured of HIV was the American Timothy Ray Brown, who, like the City of Hope patient, was diagnosed with AML. His case was announced in 2008 and then published in 2009. Two subsequent cases were announced at a conference in 2019, known as the Düsseldorf and London patients, who had AML and Hodgkin lymphoma, respectively. The London patient, Adam Castillejo, went public in 2020.
Compared with the City of Hope patient, Brown nearly died after the two rounds of full-dose chemotherapy and the full-body radiation he received. Both he and Castillejo had a devastating inflammatory reaction to their treatment called graft-versus-host disease.
Dr. Björn Jensen, of Düsseldorf University Hospital, the author of the German case study — one typically overlooked by HIV cure researchers and in media reports about cure science — said that with 44 months passed since his patient has been viral rebound-free and off of antiretrovirals, the man is “almost definitely” cured.
“We are very confident there will be no rebound of HIV in the future,” said Jensen, who noted that he is in the process of getting the case study published in a peer-reviewed journal.
For the first time, University of Cambridge’s Ravindra Gupta, the author of the London case study, stated in an email to NBC News that with nearly five years passed since Castillejo has been off of HIV treatment with no viral rebound, he is “definitely” cured.
In February, a research team announced the first case of a woman and the first in a person of mixed race possibly being cured of the virus through a stem cell transplant. The case of this woman, who had leukemia and is known as the New York patient, represented a substantial advance in the HIV cure field because she was treated with a cutting-edge technique that uses an additional transplant of umbilical cord blood prior to providing the transplant of adult stem cells.
The combination of the two transplants, the study authors told NBC News in February, helps compensate for both the adult and infant donors being less of a close genetic match with the recipient. What’s more, the infant donor pool is much easier than the adult pool to scan for the key HIV-resistance genetic abnormality. These factors, the authors of the woman’s case study said, likely expand the potential number of people with HIV who would qualify for this treatment to about 50 per year.
Asked about the New York patient’s health status, Dr. Koen van Besien, of the stem cell transplant program at Weill Cornell Medicine and New York-Presbyterian in New York City, said, “She continues to do well without detectable HIV.”
Over the past two years, investigators have announced the cases of two women who are elite controllers of HIV and who have vanquished the virus entirely through natural immunity. They are considered likely cured.
Scientists have also reported several cases over the past decade of people who began antiretroviral treatment very soon after contracting HIV and after later discontinuing the medications have remained in a state of viral remission for years without experiencing viral rebound.
Speaking of the reaction of the City of Hope patient, who prefers to remain anonymous, to his new HIV status, Dickter said: “He’s thrilled. He’s really excited to be in that situation where he doesn’t have to take these medications. This has just been life-changing.”
The man has lived through several dramatically different eras of the HIV epidemic, she noted.
“In the early days of HIV, he saw many of his friends and loved ones get sick and ultimately die from the disease,” Dickter said. “He also experienced so much stigma at that time.”
As for her own feelings about the case, Dickter said, “As an infectious disease doctor, I’d always hoped to be able to tell my HIV patients that there’s no evidence of virus remaining in their system.”
Benjamin Ryan is an independent journalist specializing in science and LGBTQ coverage. He contributes to NBC News, The New York Times, The Guardian and Thomson Reuters Foundation and has also written for The Washington Post, The Nation, The Atlantic and New York.

“It’s great to have such a case,” Ambrosioni said, noting that “the point is to understand what is going on and to see if this can be replicated in other people.”
In particular, it appears that what are known as her memory-like NK cells and CD8 gamma-delta T cells are leading this effective immunological army.
The research team noted that they do not believe that the woman would have controlled HIV on her own without the immune-boosting treatment, because the mechanisms by which her immune cells appear to control HIV are different from those seen in “elite controllers,” the approximately 1 in 200 people with HIV whose immune systems can greatly suppress the virus without treatment.
Lewin, of Australia’s Peter Doherty Institute, told reporters last week that it is still difficult to judge whether the immune-boosting treatment the woman received actually caused her state of remission. Much more research is needed to answer that question and to determine if others might also benefit from the therapy she received, she said.
Four decades of HIV, a handful of cures
Over four decades, just five people have been cured or possibly cured of HIV.
The virus remains so vexingly difficult to cure because shortly after entering the body it infects types of long-lived immune cells that enter a resting, or latent, state. Because antiretroviral treatment only attacks HIV when infected cells are actively churning out new viral copies, these resting cells, which are known collectively as the viral reservoir and can stay latent for years, remain under the radar of standard treatment. These cells can return to an active state at any time. So if antiretrovirals are interrupted, they can quickly repopulate the body with virus.
The first person cured of HIV was the American Timothy Ray Brown, who, like the City of Hope patient, was diagnosed with AML. His case was announced in 2008 and then published in 2009.
Scientists report finding a second person to be ‘naturally’ cured of HIV, raising hopes for future treatments
Xu Yu, an immunologist at the Ragon Institute of MGH, MIT, and Harvard and senior author of a new report on a second person to be "naturally" cured of HIV. Credit: Jessica Rinaldi/The Boston Globe
One evening in March 2020, a doctor walked out of a hospital in the Argentine city of Esperanza cradling a styrofoam cooler. He handed it to a young man who’d been waiting outside for hours, who nestled it securely in his car and sped off. His destination, a biomedical research institute in Buenos Aires, was 300 miles away, and he only had until midnight to reach it. That day, while his sister was inside the hospital giving birth to her first child, Argentina’s president had ordered a national lockdown to prevent further spread of the coronavirus, SARS-CoV-2, including strict controls on entering and leaving the nation’s capital. If the brother didn’t make it, the contents of the cooler — more than 500 million cells from his sister’s placenta — would be lost, along with any secrets they might be holding.
The woman was a scientific curiosity. Despite being diagnosed with HIV in 2013, she’d never shown any signs of illness. And traditional tests failed to turn up evidence that the virus was alive and replicating in her body. Only the presence of antibodies suggested she’d ever been infected. Since 2017 researchers in Argentina and in Massachusetts had been collecting blood samples from her, meticulously scanning the DNA of more than a billion cells, searching for signs that the virus was still hiding out, dormant, ready to roar to life if the conditions were right. They wanted to do the same with her placenta because even though it’s an organ of the fetus, it’s loaded with maternal immune cells — a target-rich environment to mine for stealth viruses.
As the scientists reported Monday in Annals of Internal Medicine, they didn’t find any. Which means that the woman, who they are calling the “Esperanza Patient” to protect her privacy, appears to have eradicated the deadly virus from her body without the help of drugs or a bone marrow transplant — which would make her only the second person believed to have cured herself of HIV, without drugs or any other treatment.
“This gives us hope that the human immune system is powerful enough to control HIV and eliminate all the functional virus,” said Xu Yu, an immunologist at the Ragon Institute of MGH, MIT, and Harvard and senior author on the new report. “Time will tell, but we believe she has reached a sterilizing cure.” The discovery, which was previously announced at the Conference on Retroviruses and Opportunistic Infections in March, could help identify possible treatments, researchers said.
Only two times in history have doctors effectively cured HIV — in 2009 with the Berlin Patient and in 2019 with the London Patient — both times by putting the virus into sustained remission with a bone marrow transplant from a donor with a rare genetic mutation that makes cells resistant to HIV invasion. Those cases proved a cure was feasible, but transplants are expensive and dangerous, and donors difficult to find.
“Curing HIV was always assumed to be impossible,” said Steven Deeks, a longtime HIV researcher and professor of medicine at the University of California, San Francisco who was not involved in the study. He and Yu have teamed up in the past to study HIV patients whose immune systems put up a fiercer fight than most. In a Nature study published last year, they found that such individuals had intact viral genomes — meaning the virus is capable of replicating — but they were integrated at places in the patients’ chromosomes that were far from sites of active transcription. In other words, they were squirreled away and locked up inside a dusty corner of the DNA archives.
In one patient they examined, a 67-year-old California woman named Loreen Willenberg, the researchers didn’t find any intact virus in more than 1.5 billion of her cells. Willenberg had maintained control of the virus for nearly three decades without the use of antiretroviral drugs. If the Esperanza Patient is the second person known to have been naturally cured of HIV, Willenberg is the first.
“With these possible natural cures providing a roadmap for a cure, I am hoping we can come up with an intervention that one day might work for everyone,” said Deeks.
About a decade into the AIDS pandemic, doctors began to find a handful of patients who tested positive for the HIV virus but experienced no symptoms, and were later found to have vanishingly low levels of the virus in their bodies. At the time, these case studies were presumed to be one-offs; maybe these fortunate few caught a glitchy strain of HIV that wasn’t particularly good at replicating, giving their immune systems a rare edge against a disease that was considered universally deadly until the first antiretroviral drugs were developed.
But the more doctors looked, the more such patients they discovered. The past few decades have revealed that people with unusually potent immune responses make up about 0.5% of the 38 million HIV-infected people on the planet. Scientists call these people “elite controllers,” and in recent years they have become the subject of intense international study.
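The prevalence figures in the passage above lend themselves to a quick back-of-the-envelope check. In this sketch, the 38 million global estimate and the roughly 0.5% ("1 in 200") elite-controller rate are taken from the article; the resulting headcount is only an illustrative calculation, not a figure reported by the researchers.

```python
# Rough estimate of the worldwide elite-controller population,
# using the figures cited in the article (illustrative only).
people_with_hiv = 38_000_000        # global estimate cited above
elite_controller_rate = 1 / 200     # "about 0.5%" per the article

elite_controllers = people_with_hiv * elite_controller_rate
print(f"Estimated elite controllers worldwide: {elite_controllers:,.0f}")
```

At these inputs the estimate works out to roughly 190,000 people, which helps explain why the article can describe elite controllers as both rare and the subject of intense international study.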
Because their bodies represent a model of a cure for HIV, if researchers can figure out what makes them special, they might be able to bottle it up into medicines, gene therapies or other one-time treatments that could free millions from a lifetime of antiretroviral drug-taking. They might even find ways to boost the immune systems of non-responders — people whose natural defenses were so ravaged by HIV that they’re now hyper-susceptible to a host of other health woes.
One of HIV’s dirtiest tricks is that when it enters a cell — usually a T cell or other immune cell — it makes a DNA copy of itself that integrates into that cell’s genome. So when that cell’s protein-making machinery comes across that bit of viral code, it unwittingly builds more copies of the HIV invader. Antiretroviral drugs disrupt this process, buying patients’ immune systems time to find and kill these hijacked cellular factories. But some DNA copies of the viral blueprint persist — scientists call them proviruses. In theory, they could wake up and start making a virus at any time.
Paula Cannon, a molecular microbiologist who studies HIV and gene editing at the University of Southern California’s Keck School of Medicine, compares proviruses to embers lingering behind the fire of first infection, smoldering for years. If the wind kicks up just right, the fire rages to life. That’s why people need to take antiretroviral drugs for life and why they can never be cured; we have no way of attacking or wiping out these latent integrated HIV genomes. And until recently, there weren’t even good methods for detecting them. But Yu’s group has been at the forefront of developing methods that allow scientists to crack open billions of immune cells and sort through their DNA looking for the smoking remains of infections past.
“This paper is a nice showcase of the level of sophistication of the analyses that can be done now,” said Cannon. “Finding somebody who is an elite controller who not only is currently not exhibiting any HIV RNA viruses in her body, but also doesn’t look like she has the potential to do that any time in the future, isn’t exactly surprising, but it is exciting. The more we study people like this, the more I think some clues are going to come out that we’ll be able to apply to HIV-infected individuals more broadly.”
Deeks said he’s most curious to learn more about what happened during the first few days and weeks after the Esperanza Patient was infected. For some reason, her body didn’t develop antibodies to all the various HIV proteins one might expect. That suggests her natural defenses slammed the brakes on viral replication early, before the virus could spread and overwhelm her immune system. Usually, that only happens if someone starts antiretroviral drugs very early.
It can be a little tricky to study what happened in someone’s body nearly a decade ago. What’s left is the memory of the immune response the Esperanza Patient once mounted. Many of the immune system’s players are transient molecules, and unearthing evidence of them now may prove nearly impossible — like trying to find a fossil of a jellyfish or a flatworm. But Deeks said comparing her DNA or immune cell gene expression to other patients’ might reveal something interesting.
Those are the types of analyses Yu’s group is now working on, together with the Esperanza Patient’s physician, Natalia Laufer, an HIV researcher at El Instituto de Investigaciones Biomédicas en Retrovirus y SIDA in Buenos Aires who studies elite controllers. Their hope is that by combining data from their cohorts with others from around the world — including children in South Africa whose bodies have begun to control the virus after being on HIV drugs for most of their lives — that patterns of protection will begin to emerge that might one day be harnessed to produce cures.
In an email, the Esperanza Patient told STAT that she doesn’t feel special, but rather, blessed for the way the virus behaves in her body. “Just thinking that my condition might help achieve a cure for this virus makes me feel a great responsibility and commitment to make this a reality,” she wrote. Her first child is healthy and HIV-free, and she and her partner are now expecting a second, said the woman, who did not want to be named.
“It is such a beautiful coincidence that Esperanza is where she lives,” said Laufer. “Esperanza” translates, literally, to “hope.” That’s what Laufer said she felt when she met her patient in 2017.
“That individuals can be cured by themselves is a change in the paradigm of HIV,” Laufer said. She added the caveat that scientists may never be able to say “cure” for sure, because that would require the impossible task of sequencing every one of the patient’s cells. But, Laufer said, “we are seeing indications that it’s possible for some individuals to completely control infection with HIV. And that’s very, very different from what we thought 40 years ago.”
This story has been updated with the correct year doctors reported on the successful treatment of the Berlin Patient, Timothy Ray Brown.
Woman potentially cured of HIV using transplant with cord blood ...
At a Glance
A woman with leukemia is likely cured of HIV after receiving a transplant including stem cells from banked umbilical cord blood.
The result suggests a way to expand the pool of available stem cells for curing HIV in people who require transplants for other medical conditions.
HIV (yellow) attacks the immune system by destroying CD4+ T cells (red), a type of white blood cell that is vital to fighting off infection. Credit: NIAID
Three cases of HIV being cured have been reported to date. All three involved men with HIV and either leukemia or lymphoma. The men received transplants of stem cells from adult donors to treat their cancers. The stem cell donors all carried two copies of a mutation, CCR5 Δ32, that confers resistance to HIV. CCR5 is a receptor that HIV uses to infect cells. But very few people carry two copies of CCR5 Δ32, limiting the chances of finding a compatible donor, particularly for non-White patients.
A team led by Drs. Jingmei Hsu at Weill Cornell Medicine, Yvonne Bryson of the University of California, Los Angeles, and Deborah Persaud of the Johns Hopkins University School of Medicine used a modified approach to try to cure HIV. Their patient was a middle-aged woman who self-identified as mixed-race and had both HIV and fast-progressing, or acute, leukemia. Because of the difficulty in finding a compatible adult donor, the transplant included stem cells from banked umbilical cord blood. Umbilical cord blood stem cells don’t need as close a match for successful transplant as adult stem cells. If successful, this approach could expand the pool of CCR5 Δ32 stem cells available to those living with HIV.
The researchers obtained cord blood stem cells with two CCR5 Δ32 mutations that were a partial match for the patient. A challenge with transplanting cord blood stem cells is that it takes time for the cells to engraft in the body. So, the researchers infused the cord blood cells alongside stem cells from a relative of the patient. These did not carry the CCR5 Δ32 mutation but were partially compatible with the patient. In this type of transplant, the adult stem cells engraft quickly, but temporarily. This allows them to provide some immune function until the cord blood cells have a chance to take over. Results appeared in Cell on March 16, 2023.
As expected, the relative’s stem cells engrafted quickly, within two weeks of transplant. But by 14 weeks after transplant, the cord blood cells had completely taken over. A major risk associated with stem cell transplants is graft-versus-host disease—when transplanted immune cells attack the recipient’s body. But cord blood cells are less likely to do so and, in this case, the patient did not develop graft-versus-host disease.
The patient’s leukemia remains in remission more than five years after the transplant. Before the transplant, she controlled her HIV infection with antiretroviral drugs, but some HIV genetic material was still detectable. After transplant, no HIV DNA or RNA were detected. By a year after the transplant, she no longer had antibodies against HIV, which suggests HIV was no longer replicating in her body. Also, after the transplant, the stem cells in her blood resisted infection by various HIV strains in the lab.
About three years after the transplant, the patient stopped antiretroviral therapy. At the time the study results were written, 18 months after stopping treatment, the patient remained free of HIV infection. The researchers say she has now remained free of infection for nearly 30 months.
These results suggest that the cord blood stem cell transplant may have cured the patient of HIV. This would make her one of only four such patients, and the first woman to be so cured. Stem cell transplant remains a complex and risky procedure. It is only considered in people who need one to treat a life-threatening condition, such as leukemia—not to treat HIV alone. Even so, in people with HIV who do need a transplant, using cord blood stem cells could expand access to this treatment.
“It’s exceedingly rare for persons of color or diverse race to find a sufficiently matched, unrelated adult donor,” Bryson says. “Using cord blood cells broadens the opportunities for people of diverse ancestry who are living with HIV and require a transplant for other diseases to attain cures.”
—by Brian Doctrow, Ph.D.
Editor's Note: The headline and first bullet were modified soon after publication to better reflect the focus of the story.
Funding: NIH’s National Institute of Allergy and Infectious Diseases (NIAID), Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), and National Institute of Mental Health (NIMH).
How a select few people have been cured of HIV | NOVA | PBS
That said, such cures are the result of treatments too toxic to attempt on all but a select few. So while they provide a scientific roadmap toward success, they do not necessarily make researchers’ job any easier as they work to develop alternatives: safe, effective and, crucially, scalable therapies to cure HIV.
“HIV has been a tough nut to crack,” says Marshall Glesby, an infectious disease specialist at Weill Cornell Medicine in New York City and a coauthor of one of the recent HIV cure case studies. “But there is incremental progress being made in terms of our understanding of where the virus hides within the body and potential ways to purge it from those sites.”
The HIV cure research field is yet quite young. And it likely never would have ballooned as it has in recent years were it not for the very first successful cure—one that served as a catalyst and guiding light for scientists.
A transformative success
During the late 1990s and early 2000s, the HIV research establishment focused the lion’s share of its energy and resources on treatment and prevention of the virus. Actually curing HIV was generally regarded as a distant dream, with only a small set of researchers pursuing such a goal.
Then, in 2008, German scientists announced the first case of what would ultimately be deemed a successful cure of the virus. This proof of concept ignited the field and sent financial investment soaring—to $337 million in non–pharmaceutical industry funding in 2020, according to the HIV nonprofit AVAC.
Clinicians were able to cure HIV in an American man living in Berlin named Timothy Ray Brown, by exploiting the fact that he had also been diagnosed with acute myeloid leukemia, or AML. This made Brown a candidate for a stem cell (bone marrow) transplant to treat his blood cancer.
Brown’s treatment team relied on the existence of a rare genetic abnormality found among people with northern European ancestry. Known as the CCR5-delta32 mutation, it gives rise to immune cells lacking a certain coreceptor called CCR5 on their surface. This is a hook to which HIV typically latches to begin the process of infecting an immune cell and hijacking its machinery to manufacture new copies of the virus.
The clinicians found a stem cell donor who was not only a good genetic match for Brown, but who also had the CCR5-delta32 mutation. First they destroyed Brown’s immune system with full-dose chemotherapy and full-body radiation. Then they effectively gave him the donor’s immune system through the stem cell transplant. This cured his HIV by ensuring that any remaining virus in his body was incapable of infecting his new immune cells.
Variations of this method have yielded cures, or likely cures, in four other people during the years since. These cases provide researchers with increasing certainty that it is possible to achieve the ultimate goal: a sterilizing cure, in which the body has been rid of every last copy of virus capable of producing viable new copies of itself.
“It was not a given that if you completely replace the immune system, even with a purportedly non-susceptible immune system, that you would cure infection,” says Louis Picker, associate director of the Vaccine and Gene Therapy Institute at the Oregon Health & Science University. “It was possible that HIV could be hiding in non-immune cells, like endothelial cells, and still find targets to infect.”
But the small cohort of people who have been cured or likely cured to date, Picker says, “show that’s not the case.”
Nevertheless, these successes have not opened the door to a cure for HIV available to much more than a few of the estimated 38 million people living with the virus worldwide. Critically, it is unethical to provide such a dangerous and toxic treatment to anyone who does not already qualify for a stem cell transplant to treat blood cancer or another health condition.
Brown, for one, nearly died from his treatment. And a number of efforts to repeat his case have failed.
Why is HIV so hard to cure?
Highly effective treatment for HIV hit the market in 1996, transforming what was once a death sentence into a manageable health condition. Today, the therapy, a combination of drugs called antiretrovirals, is so safe, tolerable and effective, that it has extended recipients’ life expectancy to near normal. But despite the fact that these medications can inhibit viral replication to such a degree that it’s undetectable by standard tests, they cannot eradicate HIV from the body.
Standing in the way is what’s known as the HIV reservoir.
This viral reservoir is composed in large part of long-lived immune cells that enter a resting, or latent, state. Antiretrovirals only target cells that are actively producing new copies of the virus. So when HIV has infected a cell that is in a non-replicating state, the virus remains under the radar of these medications. Stop the treatment, and at any moment, any of these cells, which clone themselves, can restart their engines and repopulate the body with HIV.
This phenomenon is why people with HIV typically experience a viral rebound within a few weeks of stopping their antiretrovirals. And it is the reason why, given the harm such viral replication causes the body, those living with HIV must remain on treatment for the virus indefinitely to mitigate the deleterious impacts of the infection.
“A key new advance is the finding that those cells which harbor the virus seem resistant to dying, a problem with cancer cells,” HIV cure researcher Steven Deeks, a professor of medicine at University of California, San Francisco, says of the viral reservoir. “We will be leveraging new cancer therapies aimed at targeting these resilient, hard-to-kill cells.”
Follow-up acts
Brown stood alone on his pedestal for over a decade.
Then, at the 2019 Conference on Retroviruses and Opportunistic Infections (CROI) in Seattle, researchers announced two new case studies of men with blood cancer and HIV who had received treatments similar to Brown’s. The men, known as the Düsseldorf and London patients, were treated for Hodgkin lymphoma and AML, respectively. By the time of the conference, both had spent extended periods off of antiretroviral treatment without a viral rebound.
To this day, neither man has experienced a viral rebound—leading the authors of the London and Düsseldorf case studies recently to assert that they are “definitely” and “almost definitely” cured, respectively.
In February 2022, a team of researchers reported at CROI, held virtually, the first possible case of an HIV cure in a woman. The treatment she received for her leukemia represented an important scientific advance.
Called a haplo-cord transplant, this cutting-edge approach to treating blood cancer was developed to compensate for the difficulty of finding a close genetic match in the stem cell donor–which is traditionally needed to provide the best chance that the stem cell transplant will work properly. Such an effort is made even more challenging when attempting to cure HIV, because the CCR5-delta32 mutation is so rare.
The American woman received a transplant of umbilical cord blood from a baby, who had the genetic mutation, followed by a transplant of stem cells from an adult, who did not. While each donor was only a partial match, the combination of the two transplants was meant to compensate for this less-than-ideal scenario. The result was the successful blooming of a new, HIV-resistant immune system.
The authors of the woman’s case study, including Weill Cornell’s Marshall Glesby, estimate that this new method could expand the number of candidates for HIV cure treatment to about 50 per year.
In July, at the International AIDS Conference in Montreal, researchers announced the case of a fifth person possibly cured of HIV. Diagnosed with the virus in 1988 and 63 years old at the time of his stem cell transplant three years ago, the American man is the oldest to have achieved potential success with such a treatment and the one living with the virus for the longest. Because of his age, he received reduced intensity chemotherapy to treat his AML. Promisingly, he still beat both the cancer and the virus.
The lead author of this man’s case study, Jana K. Dickter, an associate clinical professor of infectious disease at City of Hope in Duarte, California, says that such cases provide a guide for researchers. “If we are able to successfully modify the CCR5 receptors from T cells for people living with HIV,” she says, “then there is a possibility we can cure a person from their HIV infection.”
Scientists also know of two women whose own immune systems, in an extraordinary feat, appear to have cured them of HIV. Both are among the approximately 1 in 200 people with HIV, known as elite controllers, whose immune systems are able to suppress replication of the virus to low levels without antiretroviral treatment.
Researchers believe that these women’s immune systems managed to preferentially eliminate immune cells infected with viral DNA capable of producing viable new virus, ultimately succeeding in eradicating every last such copy.
The search for the holy grail
As they seek safer and more broadly applicable therapeutic options than the stem cell transplant approach, HIV cure researchers are pursuing a variety of avenues.
Some investigators are developing genetic treatments in which, for example, they attempt to edit an individual’s own immune cells to make them lack the CCR5 coreceptor.
“The science that I am particularly excited about and that we and others are working on is to make this treatment as an in vivo deliverable therapy that would not rely on transplant centers and could ultimately be given in an outpatient setting,” says Hans-Peter Kiem, director of the stem cell and gene therapy program at the Fred Hutchinson Cancer Center in Seattle.
Then there is what’s known as the “shock and kill” method, in which drugs are used to flush the virus from the reservoir and other treatments are then used to kill off the infected cells. Conversely, “block and lock” attempts to freeze the reservoir cells in a latent state for good. Researchers are also developing therapeutic vaccines that would augment the immune response to the virus.
“Progress will be incremental and slow,” Picker predicts, “unless there is a discovery from left field—an unpredictable advance that revolutionizes the field. I do think it will happen. My personal goal is to be a very good left fielder.”
Correction: Dr. Glesby's quote in the fourth paragraph was initially published with a typo, saying "track" instead of "crack."
This reporting was supported by the Global Health Reporting Center.
“It was possible that HIV could be hiding in non-immune cells, like endothelial cells, and still find targets to infect,” says Louis Picker, associate director of the Vaccine and Gene Therapy Institute at the Oregon Health & Science University.
But the small cohort of people who have been cured or likely cured to date, Picker says, “show that’s not the case.”
Nevertheless, these successes have not opened the door to a cure available to more than a handful of the estimated 38 million people living with the virus worldwide. Critically, it is unethical to provide such a dangerous and toxic treatment to anyone who does not already qualify for a stem cell transplant to treat blood cancer or another health condition.
Brown, for one, nearly died from his treatment. And a number of efforts to repeat his case have failed.
Why is HIV so hard to cure?
Highly effective treatment for HIV hit the market in 1996, transforming what was once a death sentence into a manageable health condition. Today, the therapy, a combination of drugs called antiretrovirals, is so safe, tolerable and effective that it has extended recipients’ life expectancy to near normal. But although these medications can inhibit viral replication to such a degree that it’s undetectable by standard tests, they cannot eradicate HIV from the body.
Standing in the way is what’s known as the HIV reservoir.
This viral reservoir is composed in large part of long-lived immune cells that enter a resting, or latent, state. Antiretrovirals only target cells that are actively producing new copies of the virus. So when HIV has infected a cell that is in a non-replicating state, the virus remains under the radar of these medications. Stop the treatment, and at any moment, any of these cells, which clone themselves, can restart their engines and repopulate the body with HIV.
OHSU research offers clues for potential widespread HIV cure in people
Oregon Health & Science University researcher Jonah Sacha, Ph.D., led a nonhuman primate study that helped explain how five people who underwent stem cell transplants have been cured of HIV. The study’s findings may bring scientists closer to developing what they hope will become a widespread cure for the virus that causes AIDS. (OHSU/Christine Torres Hicks)
New research from Oregon Health & Science University is helping explain why at least five people have become HIV-free after receiving a stem cell transplant. The study’s insights may bring scientists closer to developing what they hope will become a widespread cure for the virus that causes AIDS, which has infected about 38 million people worldwide.
Published today in the journal Immunity, the OHSU-led study describes how two nonhuman primates were cured of the monkey form of HIV after receiving a stem cell transplant. It also reveals that two circumstances must co-exist for a cure to occur and documents the order in which HIV is cleared from the body — details that can inform efforts to make this cure applicable to more people.
“Five patients have already demonstrated that HIV can be cured,” said the study’s lead researcher, Jonah Sacha, Ph.D., a professor at OHSU’s Oregon National Primate Research Center and Vaccine and Gene Therapy Institute.
“This study is helping us home in on the mechanisms involved in making that cure happen,” Sacha continued. “We hope our discoveries will help to make this cure work for anyone, and ideally through a single injection instead of a stem cell transplant.”
The first known case of HIV being cured through a stem cell transplant was reported in 2009. A man who was living with HIV was also diagnosed with acute myeloid leukemia, a type of cancer, and underwent a stem cell transplant in Berlin, Germany. Stem cell transplants, which are also known as bone marrow transplants, are used to treat some forms of cancer. Known as the Berlin patient, he received donated stem cells from someone with a mutated CCR5 gene, which normally codes for a receptor on the surface of white blood cells that HIV uses to infect new cells. A CCR5 mutation makes it difficult for the virus to infect cells, and can make people resistant to HIV. Since the Berlin patient, four more people have been similarly cured.
This study was conducted with a species of nonhuman primate known as Mauritian cynomolgus macaques, which the research team previously demonstrated can successfully receive stem cell transplants. While all of the study’s eight subjects had HIV, four of them underwent a transplant with stem cells from HIV-negative donors, and the other half served as the study’s controls and went without transplants.
Of the four that received transplants, two were cured of HIV after successfully being treated for graft-versus-host disease, which is commonly associated with stem cell transplants.
Richard Maziarz, M.D. (OHSU)
Other researchers have tried to cure nonhuman primates of HIV using similar methods, but this study marks the first time that HIV-cured research animals have survived long term. Both remain alive and HIV-free today, about four years after transplantation. Sacha attributes their survival to exceptional care from Oregon National Primate Research Center veterinarians and the support of two study coauthors, OHSU clinicians who care for people who undergo stem cell transplants: Richard T. Maziarz, M.D., and Gabrielle Meyers, M.D.
Gabrielle Meyers, M.D. (OHSU)
“These results highlight the power of linking human clinical studies with pre-clinical macaque experiments to answer questions that would be almost impossible to do otherwise, as well as demonstrate a path forward to curing human disease,” said Maziarz, a professor of medicine in the OHSU School of Medicine and medical director of the adult blood and marrow stem cell transplant and cellular therapy programs in the OHSU Knight Cancer Institute.
The how behind the cure
Although Sacha said it was gratifying to confirm stem cell transplantation cured the nonhuman primates, he and his fellow scientists also wanted to understand how it worked. While evaluating samples from the subjects, the scientists determined there were two different, but equally important, ways they beat HIV.
First, the transplanted donor stem cells helped kill the recipients’ HIV-infected cells by recognizing them as foreign invaders and attacking them, similar to the process of graft-versus-leukemia that can cure people of cancer.
Second, in the two subjects that were not cured, the virus managed to jump into the transplanted donor cells. A subsequent experiment verified that HIV was able to infect the donor cells while they were attacking HIV. This led the researchers to determine that stopping HIV from using the CCR5 receptor to infect donor cells is also needed for a cure to occur.
The researchers also discovered that HIV was cleared from the subjects’ bodies in a series of steps. First, the scientists saw that HIV was no longer detectable in blood circulating in their arms and legs. Next, they couldn’t find HIV in lymph nodes, or lumps of immune tissue that contain white blood cells and fight infection. Lymph nodes in the limbs were the first to be HIV-free, followed by lymph nodes in the abdomen.
The step-wise fashion by which the scientists observed HIV being cleared could help physicians as they evaluate the effectiveness of potential HIV cures. For example, clinicians could focus on analyzing blood collected from both peripheral veins and lymph nodes. This knowledge may also help explain why some patients who have received transplants initially have appeared to be cured, but HIV was later detected. Sacha hypothesizes that those patients may have had a small reservoir of HIV in their abdominal lymph nodes that enabled the virus to persist and spread again throughout the body.
Sacha and colleagues continue to study the two nonhuman primates cured of HIV. Next, they plan to dig deeper into their immune responses, including identifying all of the specific immune cells involved and which specific cells or molecules were targeted by the immune system.
This research is supported by the National Institutes of Health (grants AI112433, AI129703, P51 OD011092) and the Foundation for AIDS Research (grant 108832), and the Foundation for AIDS Immune Research. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
In our interest of ensuring the integrity of our research and as part of our commitment to public transparency, OHSU actively regulates, tracks and manages relationships that our researchers may hold with entities outside of OHSU. In regard to this research, Dr. Sacha has a significant financial interest in CytoDyn, a company that may have a commercial interest in the results of this research and technology. Review details of OHSU's conflict of interest program to find out more about how we manage these business relationships.
All research involving animal subjects at OHSU must be reviewed and approved by the university’s Institutional Animal Care and Use Committee (IACUC). The IACUC’s priority is to ensure the health and safety of animal research subjects. The IACUC also reviews procedures to ensure the health and safety of the people who work with the animals. No live animal work may be conducted at OHSU without IACUC approval.
HIV cure | Be in the KNOW

What are scientists working on for a cure?
Cure research is still at an early stage, but it is promising. Scientists are working on two types of research – a ‘functional cure’ and a ‘sterilising cure’ – explained in more detail below.
What should I do until there is a cure?
Until there is a cure, people with HIV must take treatment. This can reduce the level of HIV in your body to such a low amount that you are unable to pass it on (an undetectable viral load).
Myths
Here we share the truth about some common HIV cure myths. Remember... treatment is the only way to keep yourself healthy when you have HIV.
Does having sex with a virgin cure HIV?
No. Having sex with a virgin will put them at risk of being infected with HIV if protection isn’t used. It is also a criminal offence to have sex with those under the age of consent, or those who have not consented to sex.
Can natural, herbal or alternative medicines cure HIV?
No. There is no natural, herbal or alternative cure for HIV. Antiretroviral treatment is the only medication that can control HIV. Taking herbal medicines can be dangerous as they will not protect your immune system from the damage caused by HIV. Some herbal medicines can also make antiretroviral treatment less effective.
Can a higher power, prayers or spells cure HIV?
No. Faith helps many people to deal with the difficulties of having HIV. But taking antiretroviral treatment is the only way you can stay healthy. Religion can be good for support, but you should also visit your healthcare worker for treatment and medical advice and take treatment as prescribed.
Are you cured if you have an undetectable viral load?
No. Some people who take their treatment well can achieve a viral load so low that it is called ‘undetectable’. This also means they can’t pass HIV on to others. However, this doesn’t mean that they’re cured, as HIV is still present in their body.
Does having no symptoms mean you’re cured?
No. HIV can exist in the body without displaying any symptoms for up to 10 or 15 years. So, you may have the virus for some time and feel fine. If you are on treatment and don’t have any symptoms, then your treatment is working at keeping your immune system strong.
Why is it so hard to find a cure for HIV?
Due to the complex nature and structure of HIV, locating and quantifying the amount of virus in the body is very difficult.
HIV evades the immune system by staying dormant in infected T-cells until they are activated to respond to infections. This state is called latent infection. Some of these cells may live for decades without becoming activated. Cells that are latently infected are described as the ‘HIV reservoir’.
Detecting and eliminating these cells are the biggest challenges facing cure research.
What is a functional cure?
A functional cure would reduce the amount of HIV in the body to such low levels that it can’t be detected or make you ill. But, it wouldn’t completely get rid of the virus.
Some people think that ART is a functional cure. However, most agree that a true functional cure would suppress the virus without the need for people to take treatment for the rest of their lives.
There are a few examples of people considered to have been functionally cured, such as the Mississippi baby. But in all these cases the virus has re-emerged. Most of these people received antiretroviral treatment very quickly after infection or birth.
What is a sterilising cure?
A sterilising cure eradicates HIV from the body completely, including from hidden reservoirs.
There are only two people who have been cured in this way: Timothy Brown, also known as the 'Berlin patient', and Adam Castillejo, known as the ‘London patient’.
In 2007-08, Brown had chemotherapy and a bone marrow transplant to treat leukaemia. His transplant came from someone with a natural genetic resistance to HIV. Following the transplant, Brown appeared to be cured of HIV. Doctors later replicated the results on Castillejo. In 2020 they confirmed that, 30 months after stopping treatment, he was still HIV-free.
Despite the promising results from both cases, this type of procedure would not be suitable for most people living with HIV. This is because bone marrow transplants are very invasive and risky.
HIV/AIDS Myths and Facts
With so much information out there about HIV/AIDS, it may be hard to tell what is true and what is not. Here we will discuss those myths to make sure you know the truth about how HIV is transmitted, the treatments available, and life for those diagnosed with HIV.
What is a MYTH? A myth is untrue or false information. What is a FACT? A fact is true information that can be verified through a credible source.
MYTH: HIV or AIDS can be cured.
FACT: There is no cure for HIV/AIDS. Treatments are available, but they do not cure the disease itself.
MYTH: “HIV/AIDS is a death sentence.”
FACT: Currently, there are over 35 FDA approved medications to treat HIV/AIDS. These medications, primarily known as anti-retroviral therapy, allow HIV positive individuals to live a full and healthy life after diagnosis and early treatment.
MYTH: “If I take birth control, I won’t get HIV.”
FACT: Birth control does not protect you against HIV. It is important to use protection when engaging in any type of sexual activity.
MYTH: “Women who are HIV positive can’t — and shouldn’t — have babies.”
FACT: There are a number of options for women who are HIV positive to have perfectly normal and healthy babies. HIV positive women who become pregnant are encouraged to speak with their doctor or nurse about the best treatment options available. Early prenatal care is important to reduce the likelihood of mother to child transmission.
MYTH: “It’s okay to have unprotected sex if you and your partner are both positive.”
FACT: Different strains of HIV among partners can result in superinfection, which is when two strains combine and alter the virus. Use of a new condom for each sexual act along with medication adherence minimizes the chance of superinfection.
MYTH: “I can’t get HIV because I’m not gay/black/a drug user.”
FACT: HIV affects people from all backgrounds regardless of age, race, ethnicity, gender, or sexual orientation.
MYTH: “I can’t get HIV because I’m in a monogamous relationship.”
FACT: It is important to engage in honest and open conversations about monogamy with your partner and get tested together.
MYTH: My partner tested negative for HIV. That means we don’t need to have safer sex.
FACT: Remember to always negotiate condom use with any partner and to get tested along with your partner to reduce the likelihood of transmitting HIV. The only way to know for sure is if you’re both tested and engage in open/honest discussions about your relationship and STDs.
MYTH: Faithful and loving partners do not spread HIV.
FACT: People hold different views about what it means to be “faithful” and “loving,” so it is critical not to assume your definition is the same as your partner’s. It is important for you and your partner to get tested together and to engage in honest and open conversations about your relationship and what you expect from each other.
MYTH: When you’re on HIV therapy, you can't transmit the virus to anyone else.
FACT: HIV treatment reduces the chance of passing HIV by 96%, but there is a 4% chance of transmission between an infected (virally suppressed) and uninfected partner.
MYTH: Since I only have oral sex, I'm not at risk for HIV/AIDS.
FACT: Although studies show you have a considerable lower risk of getting HIV through oral sex, there is still a possibility, especially if the receptive partner has had recent dental work or has open sores/wounds.
MYTH: I would know if a loved one had HIV by looking at them.
FACT: You cannot tell if someone has HIV by looking at them, people can be infected with HIV for up to 10 years or more and still show no symptoms.
MYTH: “I can’t get HIV if I have a STD.”
FACT: STDs including HIV have the same primary transmission method, so the same activities that place you at risk for STDs place you at risk for HIV. Having an STD also increases your chances of HIV infection because of breaks or tears in the genital tract lining or skin.
FACT: HIV CANNOT be spread through:
Saliva, such as through kissing or sharing eating utensils
Hugging or shaking hands with someone who is HIV positive
Sharing exercise equipment or playing sports with an HIV positive person
Touching a toilet seat or doorknob handle after an HIV positive person
Drinking from a public water fountain
Always make sure your health information comes from a credible source such as the Georgia Department of Public Health, the Centers for Disease Control and Prevention, or AIDS.gov.
Is There an HIV Cure? (2022 Update) | INSTI
So, Is There an HIV Cure?
HIV was first identified forty years ago, and since then the medical community has made significant progress on testing and treatment, as well as toward a vaccine and a cure. While there is no cure or vaccine yet, researchers have recently made excellent headway using gene therapy and other avenues.
Different Pathways to a Cure
Researchers and scientists believe that the world will find a cure for HIV, but there are different pathways for a cure.
A functional cure can reduce HIV in the body to levels that it can’t be detected or make someone sick, but it does not completely get rid of the virus from a body. While some may consider the current treatments (ART, or antiretroviral treatment) as a functional cure, ideally, a functional cure would suppress the virus without the need to take drugs for the rest of an infected person’s life.
A sterilising cure, however, would eradicate the virus from the body. This cure would include removing HIV from hidden reservoirs in the body – that is, from cells infected with HIV in the early stages but are not actively producing HIV in the body.
HIV Vaccines
There are no vaccines for HIV yet, but research is continuing to develop one. One set of research that is ongoing is through Duke University’s Human Vaccine Institute. Derek Cain’s team has focused on a subset of HIV patients (fewer than one-third) who eventually develop specialized antibodies that can neutralize HIV after infection. If a vaccine can induce these antibodies, there is hope that they could destroy HIV before it can take hold in an infected person.
While COVID-19 has had a negative impact on the world, some good news has come out of the ongoing pandemic. The mRNA COVID-19 vaccines, based on the molecule that instructs our cells to make specific proteins, have demonstrated the potential of a technology previously viewed with some skepticism. The successful roll-out of COVID vaccines opened up the possibility of using this technology for other diseases such as HIV. However, it is still acknowledged that an HIV vaccine will be complicated due to the nature of the virus itself, which becomes part of the human genome within 72 hours of transmission.
With the recent news that Moderna will start human trials for its mRNA HIV vaccine, it appears the fight to end HIV as a global endemic and public health crisis has received a boost. The mRNA vaccine is intended to prime B cells that have the potential to produce highly potent neutralizing antibodies by working to target the virus’s envelope to keep the virus from entering and infecting cells. The envelope is the virus’s outermost layer that acts as protection for its genetic material. The trials will test the safety of the different experimental vaccines.
HIV Cure Research Approaches
There are a few different approaches to research cures. While each is promising, as of yet, there is no cure.
Activate and eradicate – aims to flush the virus out of the reservoirs and kill any cell it infects – this is sometimes known as “shock and kill”
Gene editing – this is about changing cells so that HIV cannot infect cells in the body
Immune modulation – this method permanently changes the immune system to better fight against HIV
Stem cell transplants – this approach replaces a person’s infected immune system with a donor immune system
There Have Been Two Cases of People Cured of HIV
There are two cases where researchers cured HIV entirely, both as part of the sterilising approach.
The first was Timothy Brown (also known as the Berlin Patient), who received chemotherapy and a bone marrow transplant as part of his leukemia treatment in 2007. The transplant came from a donor with a natural resistance to HIV, and following the transplant, Brown appeared to be free of HIV.
Doctors later replicated this result in another patient, Adam Castillejo, or the London Patient, who became HIV-free following his transplant. As of 2020, 30 months after stopping treatment, Adam was still HIV-free.
Does This All Mean We Will See an HIV Cure in 2021?
Well, this September, the FDA approved the first human trial investigating CRISPR gene editing as an HIV cure. And while this doesn’t mean we will see a cure immediately, this showcases the progress researchers and scientists have made towards ending HIV as a global health threat.
Excision BioTherapeutics will begin a first-in-human Phase I/II trial to evaluate the safety, tolerability, and efficacy of EBT-101 as a potential functional cure in otherwise healthy individuals living with HIV. EBT-101 uses CRISPR to excise HIV DNA that has integrated into the DNA of infected cells, the hidden form of the virus that has been so challenging to treat and a primary reason past curative efforts have not succeeded. The therapy is delivered as a single dose using an adeno-associated virus (AAV) vector.
It is great to see that, with these advancements, both a vaccine and a cure are possible, and even likely, in our future.
Cure – Science of HIV
Source: https://scienceofhiv.org/wp/cure/
HIV Cure
HIV is so difficult to cure because the virus persists inside stable reservoirs that cannot be detected by the immune system.
This animation, created in collaboration with TED Ed, provides an introduction on HIV and AIDS and antiretroviral therapy, and provides a brief explanation of why HIV has been so difficult to cure.
Antiretroviral Therapy and the Search for a Cure
Management of HIV/AIDS is achieved using combinations of antiretroviral drugs. There are numerous classes of drugs that target different aspects of the HIV life cycle, and therapy always involves taking two or more classes of drugs in combination.
The most commonly prescribed drugs include those that prevent the viral genome from being copied and incorporated into the cell’s DNA. Other drugs prevent the virus from maturing, or block viral fusion, causing HIV to be unable to infect new cells in the body.
Antiretroviral therapy is highly effective at managing the levels of HIV. Continued use has been shown to keep HIV-infected individuals from ever progressing to AIDS, and can lower the viral count to nearly undetectable levels. With antiretroviral therapy, most people can expect to live long and healthy lives.
Unfortunately, antiretroviral therapy is not a cure for HIV. This is due to HIV’s ability to hide its instructions inside of cells where drugs cannot reach it.
During the HIV life cycle, HIV incorporates itself into its host cell’s DNA. Antiretroviral therapies can stop new viruses that might be produced from infecting new cells, but can’t eliminate the viral DNA from the host cell’s genome.
Most host cells will be killed by infection or will eventually die of old age, but a very small number of cells appear to live for a very long time in the body. Every so often, the viral DNA can get turned on, and the cell starts to produce new virus. This is why medication adherence is critical. Stopping medication, even for a short time, might result in new cells being infected with HIV.
Researchers are working hard to find a true cure for HIV that could completely eradicate the virus from an infected person. Current directions include finding a means to activate cells that are harboring viral DNA, forcing them “out into the open” where they can then be targeted by antiretroviral drugs. Researchers are also looking into ways of using genetic tools to delete viral DNA from the cell’s DNA.
Latently Infected T-Cells
A major challenge to curing HIV is the virus’ ability to “hide” undetected in cells — a stage referred to as latency. During the HIV life cycle, HIV integrates itself into its host cell’s DNA. There it persists even when it is not being actively transcribed to make new viruses. These latent viruses can stay dormant for many years.
Antiretroviral therapies can stop new viruses that might be produced from infecting new cells but can’t eliminate viral DNA from the host cell’s genome. Some of these HIV-infected cells are long-lived CD4 memory T cells and serve as the HIV reservoirs. During the homeostatic proliferation of these memory T cells, the pool of latent HIV also gets copied.
When HIV-positive individuals are on combination antiretroviral therapy (cART), they can live relatively normal and healthy lives without developing AIDS. cART also decreases the risk of HIV transmission. However, cART requires lifelong adherence to these medications.
Latency-Reversing Agents (LRA)
Latency-reversing agents are used to try to eliminate HIV reservoirs. This strategy attempts to flush the virus out of the resting cells by reawakening dormant viral DNA in the latent reservoirs. This approach is usually accompanied by a second step which aims to effectively clear the infected cells.
The most common class of latency-reversing agents are HDAC (histone deacetylase) inhibitors, which can force latently infected cells to produce viruses. Histones, which are proteins that DNA wraps around, can regulate which genes are actively transcribed. In some regions of the genome, chromatin (that is, DNA and its associated proteins) is tightly condensed. As a result, DNA in these regions is not available for the cell’s transcription machinery (such as RNA polymerase II, or Pol II) to read and copy, and thus the genes there are not active. This is thought to be a major mechanism by which HIV can lie dormant in cells. Histone deacetylase inhibitors act to relax the chromatin and can thus enable genes on that segment of DNA to be turned on.
Some latency-reversing agents are known to produce significant toxicity and must be administered at low doses. Scientists are currently researching new LRA drugs that are both safe and effective.
Immune-Based Modulators
Therapeutic vaccines
There are two types of immune responses, referred to as innate and adaptive immunity. The innate response is the first line of defense against pathogens, and is considered to be more general and non-specific. The second line of defense, the adaptive immune response, recognizes specific pathogen fragments, called antigens. CD4 T cells, CD8 T cells and B cells are all part of the adaptive immune system. T cells and B cells are activated when presented with antigens, including by HIV antigens. HIV therapeutic vaccines expose an HIV-positive individual to HIV antigens that are designed to elicit a more effective adaptive immune response to the virus.
Strategies used to deliver non-infectious HIV antigens into the patient include:
DNA and RNA vaccines: These genetic vaccines use DNA plasmids or mRNA that code for the antigen. They are then taken up by the patient’s cells, which then start to produce that specific antigen.
Protein or peptide vaccines: HIV proteins or protein fragments are delivered in this class of vaccine.
Dendritic cell vaccines: In this case, antigen-presenting dendritic cells are isolated from a patient and mixed with HIV antigens. These cells are then injected back into the patient.
What happens to antigens after they are introduced to an individual? Antigens are internalized and processed by immune system sentinels called antigen-presenting cells (specifically, dendritic cells and macrophages). These antigen fragments are then presented to helper T cells. Specific helper T cells can become activated after being presented with these antigens, causing them to activate and proliferate into clones. This army of helper T cells can release different signals that activate B cells to start producing antibodies. At the same time, cytotoxic T cells, which also recognize the same target antigen, are activated.
When cytotoxic T cells interact with infected cells that are displaying the specific target antigen on their surface (on proteins known as MHC class I molecules), the T cells produce granzymes and perforin, which cause the infected cell to break down.
Broadly Neutralizing Antibodies - Passive Immunization
While many antibodies produced during an infection specifically recognize specific strains of a virus, other antibodies can recognize multiple virus strains. These types of antibodies are known as broadly neutralizing antibodies, or bnAbs. In the case of HIV, these antibodies can inhibit a broad array of different HIV isolates. Current vaccine research strategies focus on the induction of bnAb production.
Some bnAbs have been isolated from the B cells of HIV-positive individuals and sequenced. These bnAbs can then be manufactured and administered by subcutaneous injection or infusion to other infected individuals. The bnAbs can then recognize infected cells and target them for destruction by natural killer cells.
Gene and Cell Therapy
Researchers are also exploring other possible HIV cure approaches that focus on gene and cell therapies. The goal of gene therapy is to deliver therapeutic genes into a patient that will treat the disease. In cell therapy, living cells are transplanted into a patient to treat the disease.
One cell therapy approach involves engineering HIV target cells to render them resistant to HIV entry. Another approach is to modify cytotoxic T cells to selectively target and eliminate infected cells. Generally, these therapies rely on genetic modification to “edit” the blood cells of the patient so that they become resistant to HIV or become better at targeting and eliminating HIV and infected cells.
In gene therapy, anti-HIV genes are introduced into cells using a viral vector or an engineered nanoparticle. Gene therapies use different approaches to target HIV. In some cases, target genes are edited either by inserting a therapeutic sequence or by disrupting DNA sequences of proteins that are important in the HIV life cycle. Examples of gene editing methods include the use of TAL-effectors or zinc fingers with nuclease, and/or CRISPR-Cas9. With these techniques, a specific sequence of DNA is recognized and cut, and (in some cases) new DNA is introduced at the cut site.
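To make the “recognize and cut” step concrete, here is a toy sketch of how a CRISPR-Cas9 target site can be located in software: Cas9 cuts where a ~20-nucleotide guide sequence matches the DNA and is immediately followed by an “NGG” PAM motif. The guide and genome strings below are invented for illustration; real tools also scan the reverse strand, tolerate mismatches, and score off-target risk.

```python
# Toy model of Cas9 target-site recognition: a ~20-nt guide match
# immediately followed by an "NGG" PAM. Sequences are made up.
def find_cas9_sites(genome, guide):
    """Return 0-based positions where `guide` matches and an NGG PAM follows."""
    sites = []
    glen = len(guide)
    for i in range(len(genome) - glen - 2):
        if genome[i:i + glen] != guide:
            continue
        pam = genome[i + glen:i + glen + 3]  # the 3 bases after the match
        if pam[1:] == "GG":                  # "N" can be any base
            sites.append(i)
    return sites

guide = "GATTACAGATTACAGATTAC"  # hypothetical 20-nt guide
genome = "TTT" + guide + "TGG" + "AAAA" + guide + "TTT"
print(find_cas9_sites(genome, guide))  # only the first copy has a valid PAM
```

A real editing workflow would then supply a repair template so that, when the cell heals the cut, the therapeutic sequence is written in, or would simply let error-prone repair disrupt the targeted gene.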
A primary target for many gene therapy approaches is the CCR5 gene. CCR5 is a receptor found on the surface of white blood cells, including T cells, and is required by HIV to enter T cells. When this protein is absent (such as in individuals with a naturally occurring deletion, such as one called CCR5-Δ32), HIV cannot infect cells. An individual known as the “Berlin Patient” was cured of his leukemia and of HIV when he received a stem cell transplant from a donor who has a double CCR5-Δ32 deletion mutation. This mutation, however, is very rare. Many current cure approaches focus on introducing CCR5 deletions or mutations. One possible complication, however, is that a similar receptor to CCR5, called CXCR4, exists in white blood cells, and it is possible that HIV may be able to adapt in order to use CXCR4 to gain entry into cells in the absence of CCR5.
Gene Therapy using engineered CD8+ cells
Chimeric antigen receptors (CARs) are engineered receptor proteins. They are called chimeric because they combine parts of two different proteins: in this case, an antigen-binding domain is “glued” to the signaling domain of T cells. The signaling domain of T cells is what gives white blood cells the signal to release biochemical compounds that kill pathogens or infected/mutated cells. CAR-T therapies have been successfully used to treat some cancers.
In the case of HIV, researchers have engineered T cells or natural killer cells to express a chimeric receptor that can selectively bind and kill infected cells that express HIV envelope protein. An early example of these CAR-T cells expressed a receptor consisting of the extracellular domain of CD4 fused with the signaling domain of cytotoxic T cells. Since CD4 is the receptor that binds HIV envelope protein, any infected cell with Env on its surface should be recognized by this CAR-T cell, triggering a signaling event that leads to the release of toxic particles and kills the HIV-infected cell. Disappointingly, however, researchers found that CAR-T cells expressing the CD4-based chimeric receptor can themselves become infected with HIV, and these efforts failed to cure patients in clinical trials.
In an effort to improve the design of CAR-T cell therapy for HIV patients, researchers engineered a receptor that fused the binding domain of a monoclonal antibody (called scFv) that can specifically recognize HIV Env protein with the T cell receptor’s signaling domain.
Unfortunately, this bNAb-based CAR was not found to be effective therapeutically, and researchers are continuing to engineer new CARs that have started to show promising results. Using multiple antigen-recognizing domains in a single CAR, for example, has been shown to improve protection in animal studies. Researchers have recently designed a construct, called duoCAR, that expresses two CAR molecules which together carry multiple HIV-binding domains. In animal models, duoCAR therapy was able to successfully eliminate HIV-infected cells and resist HIV infection. This may be a viable approach for controlling viral loads and eliminating latent cells in HIV patients.
Lenacapavir
Lenacapavir is an HIV capsid inhibitor developed by Gilead Sciences. Currently, it is being evaluated in phase II/III trials for people with multidrug-resistant HIV. Laboratory studies indicate the drug is unaffected by mutations associated with resistance to most approved antiretroviral therapies. Among its effects on the capsid, lenacapavir is thought to block HIV entry into the nucleus.
Why There’s No HIV Cure Yet | NOVA | PBS
Source: https://www.pbs.org/wgbh/nova/article/missing-hiv-cure/
Over the past two years, the phrase “HIV cure” has flashed repeatedly across newspaper headlines. In March 2013, doctors from Mississippi reported that the disease had vanished in a toddler who was infected at birth. Four months later, researchers in Boston reported a similar finding in two previously HIV-positive men. All three were no longer required to take any drug treatments. The media heralded the breakthrough, and there was anxious optimism among HIV researchers. Millions of dollars of grant funds were earmarked to bring this work to more patients.
But in December 2013, the optimism evaporated. HIV had returned in both of the Boston men. Then, just this summer, researchers announced the same grim results for the child from Mississippi. The inevitable questions mounted from the baffled public. Will there ever be a cure for this disease? As a scientist researching HIV/AIDS, I can tell you there’s no straightforward answer. HIV is a notoriously tricky virus, one that’s eluded promising treatments before. But perhaps just as problematic is the word “cure” itself.
Science has its fair share of trigger words. Biologists prickle at the words “vegetable” and “fruit”—culinary terms which are used without a botanical basis—chemists wrinkle their noses at “chemical free,” and physicists dislike calling “centrifugal” a force—it’s not; it only feels like one. If you ask an HIV researcher about a cure for the disease, you’ll almost certainly be chastised. What makes “cure” such a heated word?
It all started with a promise. In the early 1980s, doctors and public health officials noticed large clusters of previously healthy people whose immune systems were completely failing. The new condition became known as AIDS, for “acquired immunodeficiency syndrome.” A few years later, in 1984, researchers discovered the cause—the human immunodeficiency virus, now known commonly as HIV. On the day this breakthrough was announced, health officials assured the public that a vaccine to protect against the dreaded infection was only two years away. Yet here we are, 30 years later, and there’s still no vaccine. This turned out to be the first of many overzealous predictions about controlling the HIV epidemic or curing infected patients.
The progression from HIV infection to AIDS and eventual death occurs in over 99% of untreated cases—making it more deadly than Ebola or the plague. Despite being identified only a few decades ago, AIDS has already killed 25 million people and currently infects another 35 million, and the World Health Organization lists it as the sixth leading cause of death worldwide.
HIV disrupts the body’s natural disease-fighting mechanisms, which makes it particularly deadly and complicates efforts to develop a vaccine against it. Like all viruses, HIV gets inside individual cells in the body and hijacks their machinery to make thousands of copies of itself. HIV replication is especially hard for the body to control because the white blood cells it infects, and eventually kills, are a critical part of the immune system. Additionally, when HIV copies its genes, it does so sloppily. This causes it to quickly mutate into many different strains. As a result, the virus easily outwits the body’s immune defenses, eventually throwing the immune system into disarray. That gives other obscure or otherwise innocuous infections a chance to flourish in the body—a defining feature of AIDS.
Early Hope
In 1987, the FDA approved AZT as the first drug to treat HIV. With only two years between when the drug was identified in the lab and when it was available for doctors to prescribe, it was—and remains—the fastest approval process in the history of the FDA. AZT was widely heralded as a breakthrough. But as the movie Dallas Buyers Club poignantly retells, AZT was not the miracle drug many hoped. Early prescriptions often elicited toxic side-effects and only offered a temporary benefit, as the virus quickly mutated to become resistant to the treatment. (Today, the toxicity problems have been significantly reduced, thanks to lower doses.) AZT remains a shining example of scientific bravura and is still an important tool to slow the infection, but it is far from the cure the world had hoped for.
Then, in the mid-1990s, some mathematicians began probing the data. Together with HIV scientists, they suggested that by taking three drugs together, we could avoid the problem of drug resistance. The chance that the virus would have enough mutations to allow it to avoid all drugs at once, they calculated, would simply be too low to worry about. When the first clinical trials of these “drug cocktails” began, both mathematical and laboratory researchers watched the levels of virus drop steadily in patients until they were undetectable. They extrapolated this decline downwards and calculated that, after two to three years of treatment, all traces of the virus should be gone from a patient’s body. When that happened, scientists believed, drugs could be withdrawn, and finally, a cure achieved. But when the time came for the first patients to stop their drugs, the virus again seemed to outwit modern medicine. Within a few weeks of the last pill, virus levels in patients’ blood sprang up to pre-treatment levels—and stayed there.
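The mathematicians’ reasoning is easy to reproduce as a back-of-envelope calculation. The numbers below are illustrative assumptions, not figures from the article: HIV’s error rate is taken as roughly 3e-5 per base per replication, an untreated patient is assumed to produce on the order of 1e10 new virions per day, and each drug is assumed to be defeated by one specific point mutation.

```python
# Back-of-envelope version of the "drug cocktail" argument.
# All numbers are illustrative assumptions, not measurements.
mu = 3e-5               # chance a given resistance mutation arises in one copy
virions_per_day = 1e10  # assumed viral production in an untreated patient

for n_drugs in (1, 2, 3):
    p_resistant = mu ** n_drugs  # all n mutations must occur in the same genome
    expected = p_resistant * virions_per_day
    print(f"{n_drugs} drug(s): ~{expected:g} fully resistant virions per day")
```

Under these assumptions, a single drug faces hundreds of thousands of fully resistant virions every day, while three drugs push the expected count well below one per day, which is the intuition behind combination therapy (and why the later rebound from latent virus came as such a surprise).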
In the three decades since, over 25 more highly potent drugs have been developed and FDA-approved to treat HIV. When two to five of them are combined into a drug cocktail, the mixture can shut down the virus’s replication, prevent the onset of AIDS, and return life expectancy to a normal level. However, patients must continue taking these treatments for their entire lives. Though better than the alternative, drug regimens are still inconvenient and expensive, especially for patients living in the developing world.
Given modern medicine’s success in curing other diseases, what makes HIV different? By definition, an infection is cured if treatment can be stopped without the risk of it resurfacing. When you take a week-long course of antibiotics for strep throat, for example, you can rest assured that the infection is on track to be cleared out of your body. But not with HIV.
A Bad Memory
The secret to why HIV is so hard to cure lies in a quirk of the type of cell it infects. Our immune system is designed to store information about infections we have had in the past; this property is called “immunologic memory.” That’s why you’re unlikely to be infected with chickenpox a second time or catch a disease you were vaccinated against. When an infection grows in the body, the white blood cells that are best able to fight it multiply repeatedly, perfecting their infection-fighting properties with each new generation. After the infection is cleared, most of these cells will die off, since they are no longer needed. However, to speed the counter-attack if the same infection returns, some white blood cells will transition to a hibernation state. They don’t do much in this state but can live for an extremely long time, thereby storing the “memory” of past infections. If provoked by a recurrence, these dormant cells will reactivate quickly.
This near-immortal, sleep-like state allows HIV to persist in white blood cells in a patient’s body for decades. White blood cells infected with HIV will occasionally transition to the dormant state before the virus kills them. In the process, the virus also goes temporarily inactive. By the time drugs are started, a typical infected person contains millions of these cells with this “latent” HIV in them. Drug cocktails can prevent the virus from replicating, but they do nothing to the latent virus. Every day, some of the dormant white blood cells wake up. If drug treatment is halted, the latent virus particles can restart the infection.
HIV researchers call this huge pool of latent virus the “barrier to a cure.” Everyone’s looking for ways to get rid of it. It’s a daunting task, because although a million HIV-infected cells may seem like a lot, there are around a million times that many dormant white blood cells in the whole body. Finding the ones that contain HIV is a true needle-in-a-haystack problem. All that remains of a latent virus is its DNA, which is extremely tiny compared to the entire human genome inside every cell (about 0.001% of the size).
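A rough calculation shows why simply waiting for the reservoir to die off is not an option. Both inputs below are assumptions taken from the research literature, not from this article: on the order of one million latently infected cells, and a measured reservoir half-life of roughly 44 months.

```python
import math

# How long would it take the latent reservoir to decay away on therapy alone?
# Assumed inputs (from the research literature, not this article):
cells = 1e6              # latently infected cells at the start of treatment
half_life_months = 44.0  # measured decay half-life of the reservoir

halvings = math.log2(cells)              # halvings to get from 1e6 cells to 1
years = halvings * half_life_months / 12
print(f"~{halvings:.0f} halvings, i.e. about {years:.0f} years of therapy")
```

Estimates like this one, at around seven decades of uninterrupted treatment, are why researchers regard the latent pool as the barrier to a cure rather than something patients can simply outlast.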
Defining a Cure
Around a decade ago, scientists began to talk amongst themselves about what a hypothetical cure could look like. They settled on two approaches. The first would involve purging the body of latent virus so that if drugs were stopped, there would be nothing left to restart the infection. This was often called a “sterilizing cure.” It would have to be done in a more targeted and less toxic way than previous attempts of the late 1990s, which, because they attempted to “wake up” all of the body’s dormant white blood cells, pushed the immune system into a self-destructive overdrive. The second approach would instead equip the body with the ability to control the virus on its own. In this case, even if treatment was stopped and latent virus reemerged, it would be unable to produce a self-sustaining, high-level infection. This approach was referred to as a “functional cure.”
The functional cure approach acknowledged that latency alone was not the barrier to a cure for HIV. There are other common viruses that have a long-lived latent state, such as the Epstein-Barr virus that causes infectious mononucleosis (“mono”), but they rarely cause full-blown disease when reactivated. HIV is, of course, different because the immune system in most people is unable to control the infection.
The first hint that a cure for HIV might be more than a pipe-dream came in 2008 in a fortuitous human experiment later known as the “Berlin patient.” The Berlin patient was an HIV-positive man who had also developed leukemia, a blood cancer to which HIV patients are susceptible. His cancer was advanced, so in a last-ditch effort, doctors completely cleared his bone marrow of all cells, cancerous and healthy. They then transplanted new bone marrow cells from a donor.
Fortunately for the Berlin patient, doctors were able to find a compatible bone marrow donor who carried a unique HIV-resistance mutation in a gene known as CCR5. They completed the transplant with these cells and waited.
For the last five years, the Berlin patient has remained off treatment without any sign of infection. Doctors still cannot detect any HIV in his body. While the Berlin patient may be cured, this approach cannot be used for most HIV-infected patients. Bone marrow transplants are extremely risky and expensive, and they would never be conducted in someone who wasn’t terminally ill—especially since current anti-HIV drugs are so good at keeping the infection in check.
Still, the Berlin patient was an important proof-of-principle case. Most of the latent virus was likely cleared out during the transplant, and even if the virus remained, most strains couldn’t replicate efficiently given the new cells with the CCR5 mutation. The Berlin patient case provides evidence that at least one of the two cure methods (sterilizing or functional), or perhaps a combination of them, is effective.
Researchers have continued to try to find more practical ways to rid patients of the latent virus in safe and targeted ways. In the past five years, they have identified multiple anti-latency drug candidates in the lab. Many have already begun clinical trials. Each time, people grow optimistic that a cure will be found. But so far, the results have been disappointing. None of the drugs have been able to significantly lower levels of latent virus.
In the meantime, doctors in Boston have attempted to tease out which of the two cure methods was at work in the Berlin patient. They conducted bone marrow transplants on two HIV-infected men with cancer—but this time, since HIV-resistant donor cells were not available, they just used typical cells. Both patients continued their drug cocktails during and after the transplant in the hopes that the new cells would remain HIV-free. After the transplants, no HIV was detectable, but the real test came when these patients volunteered to stop their drug regimens. When they remained HIV-free a few months later, the results were presented at the International AIDS Society meeting in July 2013. News outlets around the world declared that two more individuals had been cured of HIV.
It quickly became clear that everyone had spoken too soon. Six months later, researchers reported that the virus had suddenly and rapidly returned in both individuals. Latent virus had likely escaped the detection methods available—which are not sensitive enough—and persisted at low but significant levels. Disappointment was widespread. The findings showed that even very small amounts of latent virus could restart an infection. It also meant that the anti-latency drugs in development would need to be extremely potent to give any hope of a cure.
But there was one more hope—the “Mississippi baby.” A baby was born to an HIV-infected mother who had not received any routine prenatal testing or treatment. Tests revealed high levels of HIV in the baby’s blood, so doctors immediately started the infant on a drug cocktail, to be continued for life.
The mother and child soon lost touch with their health care providers. When they were relocated a few years later, doctors learned that the mother had stopped giving drugs to the child several months prior. The doctors administered all possible tests to look for signs of the virus, both latent and active, but they didn’t find any evidence. They chose not to re-administer drugs, and a year later, when the virus was still nowhere to be found, they presented the findings to the public. It was once again heralded as a cure.
Again, it was not to be. Just last month, the child’s doctors announced that the virus had sprung back unexpectedly. It seemed that even starting drugs as soon as infection was detected in the newborn could not prevent the infection from returning over two years later.
Hope Remains
Despite our grim track record with the disease, HIV is probably not incurable. Although we don’t have a cure yet, we’ve learned many lessons along the way. Most importantly, we should be extremely careful about using the word “cure,” because for now, we’ll never know if a person is cured until they’re not cured.
Clearing out latent virus may still be a feasible approach to a cure, but the purge will have to be extremely thorough. We need drugs that can carefully reactivate or remove latent HIV, leaving minimal surviving virus while avoiding the problems that befell earlier tests that reactivated the entire immune system. Scientists have proposed multiple, cutting-edge techniques to engineer “smart” drugs for this purpose, but we don’t yet know how to deliver this type of treatment safely or effectively.
As a result, most investigations focus on traditional types of drugs. Researchers have developed ways to rapidly scan huge repositories of existing medicines for their ability to target latent HIV. These methods have already identified compounds that were previously used to treat alcoholism, cancer, and epilepsy, and researchers are repurposing them to be tested in HIV-infected patients.
Mathematicians are also helping HIV researchers evaluate new treatments. My colleagues and I use math to take data collected from just a few individuals and fill in the gaps. One question we’re focusing on is exactly how much latent virus must be removed to cure a patient, or at least to let them stop their drug cocktails for a few years. Each cell harboring latent virus is a potential spark that could restart the infection. But we don’t know when the virus will reactivate. Even once a single latent virus awakens, there are still many barriers it must overcome to restart a full-blown infection. The less latent virus that remains, the less chance there is that the virus will win this game of chance. Math allows us to work out these odds very precisely.
Our calculations show that “apparent cures”—cases in which patients harbor latent virus at levels low enough to escape detection for months or years without treatment—are not a medical anomaly. In fact, math tells us that they are an expected result of these chance dynamics. It can also help researchers determine how good an anti-latency drug should be before it’s worth testing in a clinical trial.
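To see why even a tiny surviving reservoir matters, here is a toy calculation in the spirit of that modeling. The per-cell reactivation probability below is invented purely for illustration; it is not a figure from the article or from the actual models.

```python
def p_no_rebound(n_latent_cells, p_spark_per_cell):
    """Chance that none of the remaining latent cells ever restarts
    the infection, assuming each cell 'sparks' independently."""
    return (1.0 - p_spark_per_cell) ** n_latent_cells

# Hypothetical odds: suppose each latent cell has a 1-in-100,000
# chance of ever reigniting a full-blown infection.
p_spark = 1e-5

# Full reservoir of ~a million cells: rebound is almost certain.
print(p_no_rebound(1_000_000, p_spark))

# After a 1000-fold purge: roughly a 99% chance the virus never returns.
print(p_no_rebound(1_000, p_spark))
```

Under these made-up numbers, a thousand-fold reduction in the reservoir flips the odds from near-certain rebound to a likely cure, which is why the potency of anti-latency drugs matters so much.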
Many researchers are working to augment the body’s ability to control the infection, providing a functional cure rather than a sterilizing one. Studies are underway to render anyone’s immune cells resistant to HIV, mimicking the CCR5 mutation that gives some people natural resistance. Vaccines that could be given after infection, to boost the immune response or protect the body from the virus’s ill effects, are also in development.
In the meantime, treating all HIV-infected individuals—which has the added benefit of preventing new transmissions—remains the best way to control the epidemic and reduce mortality. But the promise of “universal treatment” has also not materialized. Currently, even in the U.S., only 25% of HIV-positive people have their viral levels adequately suppressed by treatment. Worldwide, for every two individuals starting treatment, three are newly infected. While there’s no doubt that we’ve made tremendous progress in fighting the virus, we have a long way to go before the word “cure” is not taboo when it comes to HIV/AIDS.
Sexually Transmitted Diseases: Curable and Incurable
Overview
Sexually transmitted diseases (STDs) are contracted from person to person through vaginal, anal, or oral sex. STDs are extremely common. In fact, 20 million new cases are reported in the United States each year, with 50 percent of these cases generally affecting people between the ages of 15 and 24.
The good news is that most STDs are curable and even those without a cure can be effectively managed or minimized with treatment.
Hepatitis B
Most cases of hepatitis B don’t cause symptoms and most adults can fight the infection on their own. If you have hepatitis B, your best option is to speak to your doctor about checking your liver and your medication options to lessen symptoms. Immune system modulators and antiviral medications can help slow the virus’s damage to your liver.
Herpes
Herpes is one of two chronic viral STDs. Herpes is very common — over 500 million people are estimated to have herpes worldwide.
Herpes is spread through skin-to-skin contact. Many people with herpes may not know they have it because they show no symptoms. However, when there are symptoms, they come in the form of painful sores around the genitals or anus.
Luckily, herpes is very treatable with antiviral medications that reduce outbreaks and the risk for transmission. If you have herpes and are showing symptoms, talk with your doctor about the right antiviral medications for you.
HIV
HIV is the other chronic viral STD. Thanks to modern medicine, many people with HIV can live long, healthy lives with practically no risk of infecting others through sex.
The main treatment for HIV is called antiretroviral therapy. These drugs reduce the amount of HIV in the blood to undetectable levels.
HPV
Human papillomavirus is extremely common. About 9 out of 10 sexually active people will contract HPV. About 90 percent of these infections go away within two years of detection. However, HPV is still incurable and, in some cases, it can lead to health problems such as genital warts and certain cancers, including cervical cancer.
Many children are vaccinated to protect against different forms of HPV. Pap smears for women check for HPV once every few years. Genital warts can be removed with creams, liquid nitrogen, acid, or minor surgery.
An STD, even an incurable one, can be manageable. Many are treatable, even curable, through antibiotics or antiviral medications, and some STDs clear up on their own.
With most STDs, you may not show any signs or symptoms. For this reason, it’s very important to get tested for STDs on a regular basis for your own safety, the safety of your partner(s), and general public health.
The best treatment for STDs will always be prevention. If you have an STD or think you might have one, speak with your doctor to discuss your options.
Last medically reviewed on July 26, 2018
Overview
Sexually transmitted diseases (STDs) are contracted from person to person through vaginal, anal, or oral sex. STDs are extremely common. In fact, 20 million new cases are reported in the United States each year, with 50 percent of these cases generally affecting people between the ages of 15 and 24.
The good news is that most STDs are curable and even those without a cure can be effectively managed or minimized with treatment.
Hepatitis B
Most cases of hepatitis B don’t cause symptoms and most adults can fight the infection on their own. If you have hepatitis B, your best option is to speak to your doctor about checking your liver and your medication options to lessen symptoms. Immune system modulators and antiviral medications can help slow the virus’s damage to your liver.
Herpes
Herpes is one of two chronic viral STDs. Herpes is very common — over 500 million people are estimated to have herpes worldwide.
Herpes is spread through skin-to-skin contact. Many people with herpes may not know they have it because they show no symptoms. However, when there are symptoms, they come in the form of painful sores around the genitals or anus.
Luckily, herpes is very treatable with antiviral medications that reduce outbreaks and the risk for transmission. If you have herpes and are showing symptoms, talk with your doctor about the right antiviral medications for you.
HIV
HIV is the other chronic viral STD. Thanks to modern medicine, many people with HIV can live long, healthy lives with practically no risk of infecting others through sex.
The main treatment for HIV is called antiretroviral therapy. These drugs reduce the amount of HIV in the blood to undetectable levels.
HPV
Human papillomavirus is extremely common. About 9 out of 10 sexually active people will contract HPV. | no |
HIV
About HIV
HIV is a long term health condition which is now very easy to manage. HIV stands for human immunodeficiency virus. The virus targets the immune system and if untreated, weakens your ability to fight infections and disease.
Nowadays, HIV treatment can stop the virus spreading and if used early enough, can reverse damage to the immune system.
HIV is most commonly transmitted through having unprotected sex with someone with HIV who isn't taking HIV treatment. Unprotected sex means having sex without taking HIV PrEP or using condoms.
HIV can also be transmitted by:
sharing infected needles and other injecting equipment
an HIV-positive mother to her child during pregnancy, birth and breastfeeding
All pregnant women are offered an HIV test and if the virus is found, they can be offered treatment which virtually eliminates risk to their child during pregnancy and birth.
People who take HIV treatment and whose virus level is undetectable can't pass HIV on to others. Although there is no cure for HIV yet, people living with HIV who take their treatment should have normal lifespans and live in good health.
Without treatment, people with HIV will eventually become unwell. HIV can be fatal if it's not detected and treated in time to allow the immune system to repair. It's extremely important to test for HIV if you think you've been exposed.
How do you get HIV?
HIV is found in body fluids of a person with the virus, whose levels of virus are detectable.
The body fluids most likely to contain enough virus to pass on HIV to another person are:
semen (including pre-cum)
vaginal fluid
anal mucus
blood
breast milk
HIV is a fragile virus and does not survive outside the body for long.
HIV is most commonly passed on through unprotected anal or vaginal sex. There is a very low risk of getting HIV through oral sex and there can be a small risk through sharing sex toys, which can be eliminated by using fresh condoms for each person using the toy.
How do I know if I have HIV?
Seek healthcare advice as soon as possible if you think you might have been exposed to HIV.
The only way to find out if you have HIV is to have an HIV test. This involves testing a sample of your blood or occasionally saliva for signs of the infection. In NHS services this usually involves a blood test with results available within a few days.
Some services, including HIV or sexual health charities, may provide saliva tests. Saliva tests that indicate a person may have HIV will need to be confirmed through a blood test.
It's important to be aware that:
HIV tests may need to be repeated four weeks after potential exposure to HIV, this is known as the "window period", but you shouldn't wait this long to seek help
you can get tested in a number of places, including your GP surgery, sexual health clinics and clinics run by charities
clinic tests can sometimes give you a result in minutes, although it may take a few days to get the result of a more detailed blood test
home-testing or home-sampling kits are available to buy or order online or from pharmacies – depending on the type of test you use, your result will be available in a few minutes or a few days
If the test shows you have HIV, you'll be referred to a specialist HIV clinic for some more tests and a discussion about your treatment options.
Treating and living with HIV
Treatments for HIV are now very effective, enabling people with HIV to live long and healthy lives.
Medicines known as antiretrovirals work by stopping the virus replicating in the body, allowing the immune system to repair itself and preventing further damage. These medicines usually come in the form of tablets, which need to be taken every day.
HIV is able to develop resistance to a single HIV drug very easily, but taking a combination of different drugs makes this much less likely. Most people with HIV take a combination of 3 antiretrovirals (although some people take 1 or 2) and it's vital that the medications are taken every day as recommended by your doctor.
Taking a number of different drugs doesn’t always mean taking many tablets, though, as some drugs are combined into a single tablet.
For people living with HIV, taking effective antiretroviral therapy (where the HIV virus is "undetectable" in blood tests) will prevent you passing on HIV to sexual partners.
It's extremely rare for a pregnant woman living with HIV to transmit it to their babies, provided they receive timely and effective antiretroviral therapy (ART) and medical care. An HIV test is routinely offered to all women in Scotland as part of antenatal screening.
Preventing HIV
Someone living with HIV who takes their HIV treatment and who has had an undetectable level of virus for six months, cannot transmit HIV to anyone else. Over 90% of all people diagnosed with HIV in Scotland have undetectable virus. It's therefore extremely rare for someone to get HIV from a person that knows they have the virus.
HIV Pre Exposure Prophylaxis (PrEP)
PrEP is a form of HIV medication taken by someone who does not have HIV which will help to prevent them from getting HIV. In Scotland PrEP is available on the NHS through sexual health clinics for people who are at risk of getting HIV. PrEP only provides protection from HIV and not from any other sexually transmitted infections.
Condoms (and lubricant)
Properly used condoms (and lubricant for anal sex) are effective at preventing transmission of HIV as well as other sexually transmitted infections and pregnancy.
HIV Post Exposure Prophylaxis (PEP)
HIV Post exposure prophylaxis (PEP) is a form of emergency HIV medication taken by someone who does not have HIV but who has or may have been very recently exposed to HIV.
PEP should be taken as soon as possible, but it can be taken up to 72 hours after exposure. The earlier it is taken the more effective it is.
Clean Injecting Equipment
Using fresh injecting equipment (including needles, syringes, swabs and spoons) and never sharing it will eliminate any risk of HIV.
How common is HIV?
At the end of December 2019, there were 326 reports of HIV diagnoses in Scotland. Of these, 167 were first-ever HIV diagnoses and 158 had been previously diagnosed outwith Scotland, but were newly reported in Scotland during 2019. It's estimated that there are 6,122 individuals living with HIV in Scotland and of these 92% have been diagnosed. A total of 5,074 are attending specialist HIV treatment and care services. Of these, 98% are receiving antiretroviral therapy with 95% achieving an undetectable viral load.
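Taken together, these figures describe a "treatment cascade". As a rough sketch of the arithmetic (the step-by-step totals below are derived from the percentages above, not quoted figures):

```python
living_with_hiv = 6_122   # estimated total living with HIV in Scotland, end of 2019
in_care = 5_074           # attending specialist HIV treatment and care services

diagnosed = round(living_with_hiv * 0.92)  # 92% have been diagnosed
on_art = in_care * 0.98                    # 98% of those in care receive therapy
undetectable = on_art * 0.95               # 95% of those achieve an undetectable viral load

print(diagnosed)            # roughly 5,600 people know they have HIV
print(round(undetectable))  # roughly 4,700 people have an undetectable viral load
```

Since an undetectable viral load means the virus cannot be passed on, the last figure is the group who cannot transmit HIV to others.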
The three groups with highest rates of HIV are:
gay and bisexual men or other men who have sex with men
people from countries with high HIV prevalence, especially sub Saharan African countries
people who share injecting equipment (including needles, syringes, spoons and swabs) or who have sex with people who inject drugs
The World Health Organisation estimates that around 36.9 million people in the world are living with HIV.
Symptoms of HIV
People who are infected with HIV often experience a short flu-like illness that occurs 2 to 6 weeks after infection. This is known as primary HIV infection.
The most common symptoms are:
fever (raised temperature)
sore throat
body rash
Other symptoms can include:
tiredness
joint pain
muscle pain
swollen glands (nodes)
However, these symptoms are most commonly caused by conditions other than HIV, and do not mean you have the virus.
If you have several of these symptoms, and you think you have been at risk of HIV infection within the past few weeks, you should get an HIV test.
After the initial symptoms disappear, HIV may often not cause any further symptoms for many years. During this time, HIV continues to be active and causes progressive damage to your immune system.
Once the immune system becomes severely damaged symptoms can include:
weight loss
chronic diarrhoea
night sweats
skin problems
recurrent infections
serious life-threatening illnesses
Earlier diagnosis and treatment of HIV can prevent these problems occurring and reverse them.
Causes of HIV
Routes of HIV transmission
In Scotland, HIV is most commonly transmitted by having sex with someone who has HIV without using any form of protection, such as HIV PrEP or condoms.
A person with HIV can only pass the virus to others if they have a detectable level of virus. People living with HIV who are taking treatment and have undetectable levels of virus in their bodies can't transmit HIV to others.
Over 90% of people living with HIV in Scotland have undetectable levels of virus.
The main routes of transmission are unprotected receptive or insertive vaginal and anal sex. The risk of transmitting HIV through oral sex is extremely low.
Other ways of getting HIV include:
sharing needles, syringes and other injecting equipment
from mother to baby before or during birth when the mother isn't taking HIV medication
from mother to baby by breastfeeding when the mother isn't taking HIV medication
sharing sex toys with someone infected with HIV and who isn't taking HIV medication (or by not using a fresh condom on sex toys for each person using it)
blood transfusion (outside of the UK)
How is HIV transmitted?
HIV is not passed on easily from one person to another. The virus does not spread through the air like cold and flu viruses.
HIV lives in the blood and in some body fluids. To get HIV, one of these fluids from someone with HIV (who has detectable levels of virus in their body) has to get into your blood.
The body fluids that contain enough HIV to infect someone are:
semen (including precum)
vaginal fluids, including menstrual blood
breast milk
blood
lining inside the anus
Other body fluids like saliva, sweat or urine do not contain enough of the virus to infect another person.
The main ways the virus enters the bloodstream are:
by injecting into the bloodstream with a contaminated needle or injecting equipment
through the thin lining on or inside the anus and genitals
via cuts and sores in the skin
HIV is not passed on through:
kissing
spitting
being bitten
contact with unbroken, healthy skin
being sneezed on
sharing baths, towels or cutlery
using the same toilets or swimming pools
mouth-to-mouth resuscitation
contact with animals or insects such as mosquitoes
Who is most at risk?
Having unprotected sex increases the risk of being infected with HIV. Unprotected sex means having sex where you are not taking HIV PrEP or using condoms. People who are at higher risk of becoming infected with HIV include people who are not taking PrEP medication and who are:
men who have had unprotected anal sex with men
women who have had unprotected sex with men who have sex with men
people who have had unprotected sex with a person who has lived or travelled in a high HIV prevalence country
people who inject drugs
people who have had unprotected sex with somebody who has injected drugs
people who have caught another sexually transmitted infection
people who have received a blood transfusion while in Africa, eastern Europe, the countries of the former Soviet Union, Asia or central and southern America
Diagnosing HIV
The only way to find out if you have HIV is to have an HIV test, as symptoms of HIV may not appear for many years.
HIV testing is provided to anyone free of charge on the NHS. Many clinics can give you the result on the same or next day, and home-testing and home-sampling kits are also available from some services and charities or to buy online.
Who should get tested for HIV?
Anyone who thinks they could have HIV should get tested.
Certain groups of people are at particularly high risk and are advised to have regular tests. For example:
gay and bisexual men or men who have sex with men are advised to have an HIV test at least once a year, or every 3 months, if having sex without HIV PrEP or condoms with new or casual partners
women and men from countries with high HIV prevalence, especially from sub Saharan Africa are advised to have an HIV test, if having sex without using HIV PrEP or condoms with new or casual partners
people who inject drugs or who have sex without using HIV PrEP and condoms with people who inject drugs
An HIV test is one of the range of tests routinely offered to all women in Scotland as part of antenatal screening. There are also home-sampling and home-testing kits (see below) you can use if you don't want to visit a clinic.
Types of HIV tests
There are 4 main types of HIV test:
full blood test – where a sample of blood is taken in a clinic and sent for testing in a laboratory. Results are usually available within a few days.
"point of care" test – where a sample of saliva from your mouth or a small spot of blood from your finger is taken in a clinic. This sample doesn't need to be sent to a laboratory and the result is available within a few minutes.
home-sampling kit – where you collect a saliva sample or small spot of blood at home and send it off in the post for testing. You'll be contacted by phone or text with your result in a few days. You can buy them online or from some pharmacies.
home-testing kit – where you collect a saliva sample or small spot of blood yourself and test it at home. The result is available within minutes. It's important to check that any test you buy has a CE quality assurance mark and is licensed for sale in the UK, as poor quality HIV self-tests are available from overseas.
If the test finds no sign of infection, your result is "negative". If signs of infection are found, the result is "positive".
The full blood test is the most accurate test and can normally give reliable results from four weeks after infection. The other tests, whilst also accurate, may not give a reliable result until a longer period has passed after exposure to the infection (this is known as the "window period").
For all these tests, a full blood test should be carried out to confirm the result if the first test is positive. If this test is also positive, you'll be referred to a specialist HIV clinic for some more tests and a discussion about your treatment options.
Online appointment booking
You may be able to book an appointment for an HIV test online using the online booking system. This varies for different NHS board areas.
Treating HIV
Although HIV cannot be cured, it's a very manageable long term condition and effective treatment is available to enable individuals to live a long and healthy life.
If you're diagnosed with HIV, you'll be referred to a specialist HIV clinic for treatment, regular monitoring and care.
It's recommended that everyone diagnosed with HIV starts treatment shortly after being diagnosed to keep in good health and free of symptoms. Treatment for HIV is generally very well tolerated.
Medicines known as antiretrovirals work by stopping the virus replicating in the body, allowing the immune system to repair itself and preventing further damage. These medicines come in the form of tablets which need to be taken every day.
HIV can develop resistance to a single HIV drug very easily, but taking a combination of different drugs, with support from your doctor in taking your treatment, makes resistance much less likely. Most people with HIV take a combination of three antiretrovirals (although some people take 1 or 2) and it's vital that the medications are taken every day as recommended by your doctor.
For people living with HIV, taking effective antiretroviral therapy (where the HIV virus is "undetectable" in blood tests) will prevent you passing on HIV to sexual partners.
HIV Pre Exposure Prophylaxis (PrEP)
PrEP is a form of HIV medication taken by someone who does not have HIV, which will help to prevent them from getting HIV. In Scotland, PrEP is available on the NHS through sexual health clinics for people who are at risk of getting HIV.
HIV Post Exposure Prophylaxis (PEP)
If you think you may have been exposed to HIV and you haven't taken PrEP medication or used a condom, you should take PEP medication.
Post exposure prophylaxis (PEP) is a form of emergency HIV medication taken by someone who does not have HIV but who has or may have been very recently exposed to HIV.
PEP should be taken as soon as possible, but it can be taken up to 72 hours after exposure. The earlier it is taken the more effective it is.
PEP is available from sexual health services or out of hours from A&E.
Condoms
Condoms come in a variety of shapes, colours, textures, materials and flavours. Both male and female condoms are available.
A condom is the most effective form of protection against HIV and other STIs. It can be used for vaginal and anal sex, and for oral sex performed on men.
HIV can be passed on before ejaculation, through pre-cum and vaginal secretions, and from the anus.
It is very important that condoms are put on before any sexual contact occurs between the penis, vagina, mouth or anus.
You can get free condoms in most areas of Scotland, check your local sexual health service website for details.
Lubricant
Lubricant, or lube, is often used to enhance sexual pleasure and safety, by adding moisture to either the vagina or anus during sex.
Lubricant can make sex safer by reducing the risk of anal or vaginal tears caused by dryness or friction, and it can also prevent a condom from tearing. Lubricant for vaginal sex is only recommended for women who have low vaginal moisture.
Only water-based lubricant (such as K-Y Jelly) rather than an oil-based lubricant (such as Vaseline or massage and baby oil) should be used with condoms.
Oil-based lubricants weaken the latex in condoms and can cause them to break or tear.
Sharing needles and injecting equipment
If you inject drugs, you shouldn't share needles, syringes or other injecting equipment such as spoons and swabs as this could expose you to HIV and other viruses found in the blood, such as hepatitis C.
Many local authorities and pharmacies offer needle exchange programmes, where used needles can be exchanged for clean ones.
A GP or drug counsellor should be able to advise you about free injecting equipment provision including needles.
If you are having a tattoo or piercing, it's important that a clean, sterilised needle is always used.
Living with HIV
HIV is a long term condition which is easy to manage and treat. People living with HIV who are on treatment will live a near normal lifespan in very good health.
Adjusting to living with HIV can take a while for some people. Your HIV clinic can provide support for you in managing your condition and in adjusting to living with the condition. They will also be able to signpost you to support services provided by HIV support organisations.
Practical issues you might require support with include psychological support, telling people about your HIV, sex and relationships, pregnancy and financial support.
Your HIV medication
Your HIV clinic will provide you with advice and support to help you take your HIV medicine and stay well and healthy. It's best to tell your HIV doctor or HIV pharmacy about all other drugs – including over-the-counter medications, supplements, and recreational drugs – you are taking, to check they won't interact with your HIV medication.
Your health
In addition to taking HIV medication, there are many things you can do to improve your general health and reduce your risk of falling ill.
These include:
regular exercise
healthy eating
stopping smoking
reducing the amount of alcohol you drink
Reviewing your treatment
Because HIV is a long-term condition, you will be in regular contact with your healthcare team, who will review your treatment on an ongoing basis.
A good relationship with the team means that you can easily discuss your symptoms or concerns. The more the team knows, the more they can help you.
Services, including support organisations, may work together to provide specialist care and emotional support.
Preventing infection
Everyone with a long-term condition such as HIV is encouraged to get a flu vaccination each autumn to protect against seasonal flu (influenza).
It is also recommended that they get a pneumococcal vaccination. This is an injection that protects against a serious chest infection called pneumococcal pneumonia.
Pregnancy and HIV
If you're planning a pregnancy and your or your partner's viral load is undetectable, your clinic can support you to time unprotected sex to increase your chance of pregnancy. Sperm washing is no longer required to prevent passing on HIV to your child. HIV treatment is available to prevent a pregnant woman from passing HIV to her child.
Without treatment, there is a one in four chance your baby will become infected with HIV. With treatment, the risk is less than one in 100.
Advances in treatment mean there is no increased risk of passing the virus to your baby with a normal delivery. However, for some women, a caesarean section may still be recommended.
It is safest to feed your baby with formula milk. Free formula milk is usually available through your HIV clinic. They can also provide advice on breastfeeding. | HIV
About HIV
HIV is a long term health condition which is now very easy to manage. HIV stands for human immunodeficiency virus. The virus targets the immune system and if untreated, weakens your ability to fight infections and disease.
Nowadays, HIV treatment can stop the virus spreading and if used early enough, can reverse damage to the immune system.
HIV is most commonly transmitted through having unprotected sex with someone with HIV who isn't taking HIV treatment. Unprotected sex means having sex without taking HIV PrEP or using condoms.
HIV can also be transmitted by:
sharing infected needles and other injecting equipment
an HIV-positive mother to her child during pregnancy, birth and breastfeeding
All pregnant women are offered an HIV test and if the virus is found, they can be offered treatment which virtually eliminates risk to their child during pregnancy and birth.
People who take HIV treatment and whose virus level is undetectable can't pass HIV on to others. Although there is no cure for HIV yet, people living with HIV who take their treatment should have normal lifespans and live in good health.
Without treatment, people with HIV will eventually become unwell. HIV can be fatal if it's not detected and treated in time to allow the immune system to repair. It's extremely important to test for HIV if you think you've been exposed.
How do you get HIV?
HIV is found in the body fluids of a person who has the virus at detectable levels.
The body fluids most likely to contain enough virus to pass on HIV to another person are:
semen (including pre-cum)
vaginal fluid
anal mucus
blood
breast milk
HIV is a fragile virus and does not survive outside the body for long.
HIV is most commonly passed on through unprotected anal or vaginal sex. There is a very low risk of getting HIV through oral sex and there can be a small risk through sharing sex toys, which can be eliminated by using fresh condoms for each person using the toy.
| no |
Ergonomics | Can using a standing desk help you lose weight? | yes_statement | using a "standing" "desk" can "help" you "lose" "weight".. "standing" while working at a "desk" can aid in "weight" loss. | https://www.health.harvard.edu/blog/the-truth-behind-standing-desks-2016092310264 | The truth behind standing desks - Harvard Health | The truth behind standing desks
ARCHIVED CONTENT: As a service to our readers, Harvard Health Publishing provides access to our library of archived content. Please note the date each article was posted or last reviewed. No content on this site, regardless of date, should ever be used as a substitute for direct medical advice from your doctor or other qualified clinician.
Are you reading this while standing at your desk? There's a good chance that you are — standing desks are all the rage.
These desks allow you to work at your "desk job" while standing rather than sitting in a chair. They can be custom built (for thousands of dollars) or you can convert a regular desk into a standing desk at no cost by elevating your computer — one of my colleagues simply placed his computer on a stack of books. Sales of standing desks have soared in recent years; in many cases their sales have far outpaced those of conventional desks.
Personally, I love the idea — rather than sitting all day staring at a computer screen, surely it would be better to be standing (while staring at a computer screen). But, I also love the idea of studying some of the assumptions surrounding standing desks. A common one is this: certainly it takes more effort — and extra calories — to remain upright rather than sit, and over a course of days or weeks those extra calories would add up to something significant. But is it true that a standing desk can help you avoid weight gain or even lose excess weight?
That's just what researchers publishing in the Journal of Physical Activity and Health tried to answer. (Yes, there is such a journal.) They fitted 74 healthy people with masks that measured oxygen consumption as a reflection of how many calories they burned while doing computer work, watching TV, standing, or walking on a treadmill. Here's what they found:
While sitting, study subjects burned 80 calories/hour — about the same as typing or watching TV
While standing, the number of calories burned was only slightly higher than while sitting — about 88 calories/hour
Walking burned 210 calories/hour.
In other words, use of a standing desk for three hours burns an extra 24 calories, about the same number of calories in a carrot. But walking for just a half hour during your lunch break could burn an extra 100 calories each day.
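The comparisons above are simple arithmetic on the study's hourly rates. As a quick sketch (note that the "extra 100 calories" for walking is roughly the total half-hour burn of about 105 calories; counting only the burn above sitting gives about 65):

```python
# Rough arithmetic behind the figures above, using the study's hourly rates.
SIT = 80    # calories burned per hour while sitting
STAND = 88  # calories burned per hour while standing
WALK = 210  # calories burned per hour while walking

# Standing instead of sitting for 3 hours:
extra_standing = (STAND - SIT) * 3
print(extra_standing)  # 24 (about one carrot's worth of calories)

# A 30-minute lunchtime walk: total burn, and burn above what sitting would use.
walk_total = WALK * 0.5
walk_extra = (WALK - SIT) * 0.5
print(walk_total, walk_extra)  # 105.0 65.0
```

The exact "extra" figure therefore depends on whether the baseline is zero or the 80 calories/hour you would have burned sitting anyway.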
Prior reports of the calories burned by standing versus sitting suggested a much higher calorie burn rate for standing, but this new study actually measured energy expenditure and likely represents a more accurate assessment.
Reasons to stand by your standing desk
While the new study suggests that a standing desk is unlikely to help with weight loss or avoiding weight gain, there may be other reasons to stand while you work. Advocates of standing desks point to studies showing that after a meal, blood sugar levels return to normal faster on days a person spends more time standing. And standing, rather than sitting, may reduce the risk of shoulder and back pain.
Other potential health benefits of a standing desk are assumed based on the finding that long hours of sitting are linked with a higher risk of
obesity
diabetes
cardiovascular disease
cancer (especially cancers of the colon or breast)
premature death.
But "not sitting" can mean many different things — walking, pacing, or just standing — and as the new study on energy expenditure shows, the health effects of these may not be the same. For most of these potential benefits, rigorous studies of standing desks have not yet been performed. So, the real health impact of a standing desk is not certain.
If you're going to stand at your desk…
Keep in mind that using a standing desk is like any other "intervention" — it can come with "side effects." For example, if you suddenly go from sitting all day to standing all day, you run the risk of developing back, leg, or foot pain; it's better to ease into it by starting with 30 to 60 minutes a day and gradually increasing it. Setting a timer to remind you when to stand or sit (as many experts recommend) can disrupt your concentration, reduce your focus, and reduce your efficiency or creativity. You may want to experiment with different time intervals to find the one that works best for you.
It's also true that certain tasks — especially those requiring fine motor skills — are more accurately performed while seated. So, a standing desk may not be a good answer for everyone who sits a lot at work.
What's next?
We have seen dramatic changes in the work environment in recent years. These include open floor plans and inflatable exercise balls instead of chairs, as well as standing desks. I have colleagues who have installed a "treadmill desk" that allows them to work on a computer or video conference while walking on a treadmill. There are advantages, and perhaps some risk, that come with each of these changes. But, before we accept them as better — or healthy — we should withhold judgment until we have the benefit of more experience and, ideally, well-designed research.
About the Author
Dr. Robert H. Shmerling is the former clinical chief of the division of rheumatology at Beth Israel Deaconess Medical Center (BIDMC), and is a current member of the corresponding faculty in medicine at Harvard Medical School. …
| no |
Ergonomics | Can using a standing desk help you lose weight? | yes_statement | using a "standing" "desk" can "help" you "lose" "weight".. "standing" while working at a "desk" can aid in "weight" loss. | https://www.scottsdaleweightloss.com/five-reasons-you-should-switch-to-a-standing-desk/ | Five Reasons You Should Switch to a Standing Desk - Scottsdale ... | Five Reasons You Should Switch to a Standing Desk
Sitting for extended periods of time is one of the worst habits you can have for your health. Research has repeatedly shown that prolonged sitting increases your risk of heart disease, diabetes, and stroke.
Breaking up your sitting into smaller chunks of less than half an hour can have tons of benefits for your health. However, sometimes sitting is unavoidable like when you’re on long flights or commuting to work. If you work at a desk, it’s easy to spend more than ten hours a day sitting at home and in the office. By switching to a standing desk, you can drastically reduce your number of seated hours.
Standing Desks Lower Your Risk of Obesity. When you’re sitting at your desk, your body is essentially idling and burning the minimum amount of calories it needs for survival. However, when you’re standing, your body has to activate muscles in your legs and core which increases the number of calories you burn. Even if you only burn off a few extra hundred calories per day over the course of a year, it quickly adds up.
Standing Desks Reduce Back Pain. Sitting with poor posture is an excellent way to put extra pressure on your spine. Your spine is like a tower of Lego blocks balanced on top of each other. When you slump your head forward or tip the tower of blocks, the tower is likely to break somewhere close to the bottom, at your lumbar vertebra. A study published by Pronk et al. found that by reducing sitting time by 224% per day, they were able to reduce the incidence of back and neck pain by 54%.
Standing Desks May Increase Lifespan. By switching to a standing desk, you may actually add years to your life. A study published in 2009 by Katzmarzyk et al. followed 17,000 adults for an average of twelve years and found a correlation between time spent sitting and risk of death from all causes.
Standing Desks Improve Insulin Sensitivity. When you sit for an extended period of time without breaks, your body becomes more resistant to insulin. A study by David Dunstan and his colleagues found that participants who sat for five hours without a break had plasma insulin levels 20% higher than participants who broke up their sitting. Even after an hour or two of continuous sitting, there were noticeable differences in insulin levels between the group who took breaks and the group who didn't. A standing desk helps break up your sitting throughout the day even if you choose to sit at times.
Standing Desks May Increase Productivity. One of the main concerns with standing desks is that they may hurt productivity in the workplace. However, one study published in 2009 by Husemann et al. found no increase in spelling mistakes and no loss of word processing speed when workers switched to a standing desk. There may even be potential for productivity gains, because standing increases mental alertness, but more research needs to be done to be conclusive.
Investing in a standing desk is an investment in your health. By sitting less you will not only lower your risk of heart disease, but you may also improve your body composition. When choosing a desk, find a model that gives you the option to sit for part of the day so that you can transition gradually into full-time standing.
Sitting for extended periods of time is one of the worst habits you can have for your health. Research has repeatedly shown that prolonged sitting increases your risk of heart disease, diabetes, and stroke.
Breaking up your sitting into smaller chunks of less than half an hour can have tons of benefits for your health. However, sometimes sitting is unavoidable like when you’re on long flights or commuting to work. If you work at a desk, it’s easy to spend more than ten hours a day sitting at home and in the office. By switching to a standing desk, you can drastically reduce your number of seated hours.
Standing Desks Lower Your Risk of Obesity. When you’re sitting at your desk, your body is essentially idling and burning the minimum amount of calories it needs for survival. However, when you’re standing, your body has to activate muscles in your legs and core which increases the number of calories you burn. Even if you only burn off a few extra hundred calories per day over the course of a year, it quickly adds up.
Standing Desks Reduce Back Pain. Sitting with poor posture is an excellent way to put extra pressure on your spine. Your spine is like a tower of Lego blocks balanced on top of each other. When you slump your head forward or tip the tower of blocks, the tower is likely to break somewhere close to the bottom, at your lumbar vertebra. A study published by Pronk et al. found that by reducing sitting time by 224% per day, they were able to reduce the incidence of back and neck pain by 54%
Standing Desks May Increase lifespan. By switching to a standing desk, you may actually add years to your life. A study published in 2009 by Katzmarzyk et al. examined 17,000 adults for an average of twelve years and found a correlation between the amount of time sitting and risk of mortality from natural death.
Standing Desks Improve Insulin Sensitivity. When you sit for an extended period of time without breaks, your body becomes more resistant to insulin. | yes |
Ergonomics | Can using a standing desk help you lose weight? | yes_statement | using a "standing" "desk" can "help" you "lose" "weight".. "standing" while working at a "desk" can aid in "weight" loss. | https://www.webmd.com/fitness-exercise/standing-desks-help-beat-inactivity | Standing Desks: How They Can Help You Beat Inactivity | Standing Desks: How They Help You Beat Inactivity
You've probably seen a co-worker catch up on emails at the office's treadmill desk, while another knocks out reports at their standing desk. But did you know they're lowering their risk for heart disease, obesity, and back and neck pain, too?
Studies have linked sitting a lot to these and other health problems. Even people who exercise most days face health risks if they sit too much. Standing desks raise your computer high enough for you to work and stand at the same time. This keeps you on your feet for more of the day.
Types of Standing Desks
All standing desks follow the same basic idea -- they let you work while you stand.
Fixed-height desks stay at your standing height. Sit-stand desks go up and down so you can sit or stand whenever you feel like it. Power sit-stand desks go up with the push of a button. You can lift manual ones with a handle or raise them with a lever or crank.
You can buy a standing desk online or at an office supply, electronics, or big-box store.
A basic fixed-height desk will cost you less than $100, but a really nice electric desk can cost more than $1,000. Treadmill desks take the idea a step further by letting you walk while you work, but they can cost more than $1,000, too.
The Pros
Besides less sitting time, standing at work has other benefits:
More calories burned: One study showed that standing sheds 88 calories an hour, compared to 80 calories for sitting. Walking burns a lot more -- 210 calories an hour.
Less back pain: Sitting for long periods of time tightens your muscles and can hurt your lower back, especially if you have bad posture. Standing desks seem to help ease back pain, but doctors don't know how much time you need to stand to get this benefit.
More productive: In a study of call center employees, those with standing desks were 45% more productive on a daily basis than employees who sat during their shift.
The Cons
Standing desks aren't perfect. They can cause a few problems:
Leg and foot pain: Standing for long periods of time puts pressure on your knees, hips, and feet. This could lead to pain. If you lift one foot to ease the pressure, being off-balance could affect your posture.
Vein problems: Being on your feet for too long makes blood collect in your leg veins. The vein may stretch to fit the extra blood and get weaker. This leads to varicose veins. People who stand for more than 6 hours a day are two or three times more likely to need surgery for varicose veins than people who stand or walk for less than 4 hours a day.
Standing doesn't replace exercise: You'll only burn a few more calories standing, which is better than nothing. But walking more than doubles your calorie burn. Studies that compared the two showed treadmill desk users had much greater improvements in blood sugar and cholesterol levels than standing desk users.
Standing desks aren't ideal for every task: You may be able to type or answer the phone while on your feet, but some tasks, like drawing and writing, are easier when you sit.
The Right Way to Stand
Experts say the best way to use a standing desk is to stand for a while, sit, then stand again. Do this several times throughout the day. To start, stand for just 30 minutes at a time, a few times a day. Add an hour, then add 2 or more hours as you feel comfortable.
Move the standing desk so your body is properly aligned. Your head, neck, and spine should be in a straight line when you stand. And your elbows should form a 90-degree angle when your wrists are flat on the desk. Put your computer monitor at eye level.
Wear comfortable shoes with no heel or a low one. Stand on a cushioned mat for more support.
Every 30 minutes or so, leave your desk and take a walk. Head to a co-worker's desk or grab a drink at the fountain to get some exercise and give your back a break. And even though you're standing more, don't forget to do at least 30 minutes of moderate-intensity exercise, 5 days a week.
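As a toy illustration of the alternating pattern described above, a day could be laid out like this (a sketch only: the 30-minute interval and 8-hour day are assumptions for the example, not medical guidance, and experts suggest tuning the interval to what feels comfortable):

```python
# Build a simple alternating sit/stand schedule for a workday, following the
# "stand for a while, sit, then stand again" advice. Interval and day length
# are illustrative defaults, not recommendations.

def sit_stand_schedule(day_minutes=480, interval=30):
    """Return (start_minute, activity) pairs alternating sit and stand."""
    schedule = []
    activity = "sit"
    for start in range(0, day_minutes, interval):
        schedule.append((start, activity))
        activity = "stand" if activity == "sit" else "sit"
    return schedule

day = sit_stand_schedule()
print(len(day))        # 16 half-hour blocks in an 8-hour day
print(day[0], day[1])  # (0, 'sit') (30, 'stand')
```

Starting out, you would simply use longer sitting intervals and shorter standing ones, then shift the balance as your body adjusts.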
| yes |
Ergonomics | Can using a standing desk help you lose weight? | yes_statement | using a "standing" "desk" can "help" you "lose" "weight".. "standing" while working at a "desk" can aid in "weight" loss. | https://thestandingdesk.com/health-benefits-of-standing-desks/ | Even More Health Benefits of Standing Desks - The Standing Desk |
Even More Health Benefits of Standing Desks
Is standing up at work healthier than sitting down?
Office jobs have become increasingly sedentary in the past few decades, contributing to an overall epidemic of sitting. Do the benefits of standing at work make standing full time a healthier option? The truth is that standing all day is really no healthier than sitting all day. The key is to find the best of both worlds: a height adjustable sit stand desk.
What are health risks or issues associated with sitting all day or too much time spent sitting?
According to the Annals of Internal Medicine, the average person sits for half of their waking hours, and all of this time sitting can lead to serious health issues. Cardiovascular disease, diabetes, osteoporosis, back pain, obesity, and mood disorders are only some of the known health risks of desk jobs. Fortunately, the benefits of stand up desks at work can help counteract some of these negative health effects.
Does standing burn more calories?
Movement causes fat-burning enzymes to stay activated, and as a result, burns more calories than simply sitting still. One study showed that, on its own, standing burned about 8 more calories per hour than sitting – a very slow march to weight loss.
Does standing at a desk help you lose weight?
So, do you burn calories at a standing desk? For sure. Do standing desks help you lose weight? That’s a little less clear, although some research indicates slight weight loss. The way that standing can help weight loss is that it increases the likelihood of moving about (shifting weight, pacing, etc.), which further increases calorie expenditure.
So how does standing at work help you lose weight? Moving helps fat-burning enzymes to stay activated, which in turn burns more calories than remaining sedentary. Even the smallest exertions have an impact on our energy expenditure throughout the day. Over weeks and years, it all adds up. If you add other movement to your workstation, such as an under desk treadmill or a desk bike, even more calories are burned.
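To see how it adds up, here is a back-of-the-envelope sketch using the roughly 8 extra calories per hour quoted above; the 4 standing hours per workday, 250 workdays per year, and 3,500 calories per pound of fat are illustrative assumptions, not figures from the article:

```python
# Illustrative only: how a small hourly difference compounds over a year.
EXTRA_PER_HOUR = 8        # extra calories/hour standing vs sitting (quoted above)
HOURS_PER_DAY = 4         # assumed standing hours per workday
WORKDAYS = 250            # assumed workdays per year
CALORIES_PER_POUND = 3500 # common rule-of-thumb for a pound of body fat

yearly_calories = EXTRA_PER_HOUR * HOURS_PER_DAY * WORKDAYS
pounds = yearly_calories / CALORIES_PER_POUND
print(yearly_calories, round(pounds, 1))  # 8000 2.3
```

On its own that is a modest total, which is why the extra movement standing encourages (shifting weight, pacing, walking) matters more than the standing itself.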
Do standing desks help with back pain?
More than 80% of adults report some sort of back pain at least once in their life. As any back pain sufferer knows, certain postures and activities can exacerbate the problem. For office workers, sitting at their desk is often the source of their pain. In fact, too much sitting can cause herniated discs, damaged nerves, and degenerated joints associated with back pain. The Take a Stand Project in 2011 found that reducing sitting time by using a sit-stand workstation decreased upper back and neck pain by 54%.
Doesn’t standing all day hurt your back too?
Studies show that prolonged standing is associated with a number of problems, including lower back pain, fatigue, and discomfort. In fact, standing all day can be just as damaging as sitting all day, causing tightness and discomfort in the legs and back.
Is it healthier to switch between standing and sitting?
Switching between sitting and standing guarantees shifts in posture that can reduce inactivity and allow people to naturally move to the most comfortable position. Indeed, our bodies work best when we can switch positions whenever joint pain or muscle fatigue prompts us to.
How does standing affect blood pressure or blood sugar?
There is a strong connection between sedentary behavior and negative effects on blood sugar and blood pressure. In fact, a 2016 study found that sitting time correlates strongly with diabetes and other chronic diseases.
Standing affects both blood pressure and blood sugar. Within 90 seconds of standing up, the muscular and cellular systems that use insulin to process blood glucose, triglycerides, and cholesterol are activated. Our arteries dilate, and our muscles engage to push more fuel into our cells.
Blood sugar: Standing at work, particularly after eating lunch, significantly reduces blood sugar. One reason is that sitting makes your body work harder to absorb sugar and produce insulin. The result is too much stress on the cells that make insulin, which can be an important risk factor for diabetes. When researchers at the University of Leicester studied workers who sat continuously versus workers who stood every 30 minutes, they found that the workers who stood lowered their blood sugar by 34%, as well as lowering their insulin levels.
Blood pressure: Sitting a lot reduces blood flow, allowing fatty acids to build up in blood vessels, which eventually can lead to heart disease. Standing helps muscles burn more fat and increases blood flow, so fatty acids have a harder time clogging the heart.
Does standing affect risks for heart disease, diabetes, or other diseases?
We know that extended sitting increases the harmful effects of blood sugar and fat metabolism, which directly affects an individual’s risk of heart disease and diabetes.
Heart Disease: Standing is one way to help reduce the risk of heart disease. An enzyme called lipoprotein lipase breaks down fat in the blood and processes it for muscles to use during activity. According to a study of exercise physiology, inactivity suppresses lipoprotein lipase, which leaves fat in the blood and contributes to increased risk of heart disease. In fact, sitting reduces lipoprotein lipase production by about 90%. To make sure that your body can use fat as designed, rather than storing it – stand up!
Diabetes: At the same time, standing can reduce the risk of diabetes. One study published in Diabetes Care found that breaking up prolonged sitting reduced glucose, insulin, and NEFA responses in women at risk of type 2 diabetes.
How does standing affect energy levels?
For an immediate energy boost – stand up! Try it and see for yourself. It may sound simple, but the act of standing increases alertness and provides a quick jolt of energy.
What is actually happening? Blood flow to the brain slows when you’re sitting, reducing the amount of oxygen your brain receives. Simply standing and shifting your weight increases blood flow – and the result is noticeable.
Sit stand desks may help you cut down on coffee, caffeine, or other stimulants
If you’re looking to kick your caffeine habit, look no further than your standing desk. Periodic standing is a natural way to help reduce reliance on coffee and energy drinks. To prevent an afternoon energy crash, alternate sitting and standing.
Does standing at your desk affect your mood?
What do you do when you first meet someone? Stand and shake their hand. Think about it – if you want to project confidence in any situation, whether meeting a new person or speaking in public, you’ll likely be standing. The act of standing has a powerful effect on our mood and our confidence.
When we move our muscles, fresh oxygenated blood is pumped to our brains, releasing mood enhancing chemicals. It’s no wonder people report that their overall wellbeing is improved by using a standing desk. In fact, 62% of standing desk users in the Take a Stand study reported feeling happier than traditional desk users.
Do standing desks increase mental focus or productivity?
When standing at work, people tend to spend less time on unproductive and time-wasting activities like social media. The result is an overall productivity boost for the entire day. In fact, a widely known study of call center workers found that their productivity increased 45% when they stood during calls.
Improved comfort and the ability to accommodate natural positions can help improve focus and productivity. If you’re interested in increasing your daily productivity with very little effort, it makes sense to try a height adjustable desk. Be sure to find a sit stand desk that adjusts quickly and smoothly along with your body’s natural movement to minimize distractions.
Are there other health benefits to using a standing desk or a sit/stand desk?
If you’re interested in the benefits of standing versus sitting at work, find a height adjustable desk that allows both. Try it for 30 days risk free. We’re confident you’ll find something to love. And the improvements in your physical, mental, and metabolic health may surprise you.
Ergonomics | Can using a standing desk help you lose weight? | yes_statement | using a "standing" "desk" can "help" you "lose" "weight".. "standing" while working at a "desk" can aid in "weight" loss. | https://www.startstanding.org/standing-desks/standing-desk-benefits/ | 9 Standing Desk Benefits that May Surprise You - Start Standing

Standing Desk Benefits
According to the Annals of Internal Medicine, the average person sits for half of their waking hours. They sit at work and then sit at home. And sitting increases the risks of many serious health issues.
But there’s a growing health trend that helps offset the damage done by long periods of inactivity: standing desks. People are buying standing desks for their home offices, and employers now offer their employees the option to stand while they work. A survey from the Society for Human Resource Management found that standing desks are the fastest-growing benefits trend: 13% of employers provided or subsidized standing desks in 2013, 44% in 2017, and 60% in 2019.
We're going to share some research on the benefits of standing desks, as well as discuss our experiences and the experiences of others during their transition to a standing desk.
The most common benefits that we'll discuss are:
weight loss
higher productivity
reduced rate of diabetes, heart disease, cancer, and early mortality
more energy
better mood
less back pain
Weight Loss
According to a study by the Physical Activity and Weight Management Research Center, standing only burns 10% more calories than sitting. Researchers fitted 74 people with masks that measured oxygen consumption and followed 3 groups of participants. They found that participants who sat burned 80 calories/hour, those who stood burned 88 calories/hour, and those who walked burned 210 calories/hour.
Burning an additional 8 calories/hour isn’t significant, especially when you factor in that many people who stand at their desks don’t stand all day. If you stood at your desk for half of your workday, and your workday is 8 hours, then 4 hours of standing will burn an additional 32 calories vs. sitting for 8 hours. If you multiply 32 calories by 5 days per week, 44 weeks per year, you'll burn an additional 7,040 calories in one year (just over 2 lbs).
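The arithmetic above can be reproduced directly. Note that the 3,500-calories-per-pound conversion is a common rule of thumb, not a figure from the study itself:

```python
# Reproduces the estimate above: 8 extra kcal/hour while standing,
# 4 hours/day, 5 days/week, 44 working weeks/year.
extra_kcal_per_hour = 8
hours_per_day = 4
days_per_week = 5
weeks_per_year = 44

annual_extra_kcal = extra_kcal_per_hour * hours_per_day * days_per_week * weeks_per_year
pounds_lost = annual_extra_kcal / 3500  # ~3,500 kcal per pound of fat (rule of thumb)

print(annual_extra_kcal)      # 7040
print(round(pounds_lost, 1))  # 2.0
```

As the article says, that works out to just over 2 pounds per year from standing alone.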
The journal Occupational Medicine explains in a March 2017 publication that a standing desk, “provides an opportunity to increase energy expenditure throughout the working day.” They explain further, "Though modest, accumulation of this small benefit over time could be an important part of the public health strategy to prevent weight gain in desk-bound workers."
The numbers we just discussed are consistent with the experiences of many people who start using standing desks—they often lose a little weight, but not a significant amount. People have more energy when they stand compared to when they sit, and when they have more energy, they tend to be more active. Using a standing desk is just one part of an active lifestyle.
Another factor to consider is that when people stand at their desks, they often do more than just stand. It's common for people to shift, stretch, squat, and dance when they're standing at their desks. They also tend to leave their desks more often.
For those interested in losing weight while working, we recommend using an under-desk treadmill so you can walk while you work, or an under-desk bike. People who use one of these will rotate between walking, standing, and sitting throughout the day, often sitting for tasks that require deep concentration, and walking or standing for tasks that require less focus.
Productivity
Dan Kois published an experiment in New York Magazine where he stood for 30 days straight. The only time he wasn’t standing was when he slept or used the bathroom. One of the remarkable things that happened was a significant increase in his productivity. “I’ve cut my time-wasting drastically, editing and writing more than in any month I can remember.”
This isn’t an uncommon experience. Many people note that when they add standing to their work routine, they tend to be more productive and spend less time on social media and other unproductive tasks.
Research done on call center workers found a 23% increase in success rates for those who stood during calls compared to those who sat. Researchers also noted a difference in the worker’s comfort, attitude about work, and how they felt about themselves.
Researchers believe this may be because of the increased circulation to the brain, improving mental function. For this reason, many large corporations give standing desks to their employees, including Google and Facebook.
A study from Washington University shows that working while standing encourages creativity and collaboration. At least one reason why is that when people stand, they're more likely to move around and visit coworker's desks than when they're sitting.
Reduced Rates of Diabetes, Heart Disease, Cancer, and Early Mortality
There’s an abundance of research telling us that sitting for extended periods significantly increases the risk of some of the leading causes of death in the United States. Studies have found an association between a sedentary lifestyle and higher rates of heart disease (the number one cause of death), cancer (the second), and diabetes (the seventh).
Diabetes
According to Dr. I-Min Lee, a professor at Harvard Medical School, extended periods of sitting reduce fat and sugar metabolism, increasing the risk of diabetes and heart disease. Research from Australia found that plasma insulin and glucose levels were 20% higher in subjects who sat for 5 hours compared to those who took light activity breaks every 20 minutes. Another study found that sitting most of the time in a 24-hour period makes insulin 40% less effective in managing blood sugar. Sitting 6 hours/day for 2 weeks caused LDL and other fatty substances to rise, while the enzymes that break them down decreased.
Sitting time is also associated with larger waist circumference, BMI, systolic blood pressure, fasting triglycerides, and HDL cholesterol. According to Olivia Judson, an evolutionary biologist and research fellow in biology at Imperial College London, these are all "first steps on the road to diabetes."
Heart Disease
One of the first studies from 1953 found that bus conductors who stood all day had half the risk of heart disease-related deaths than drivers who sat all day. Subsequent studies have confirmed the original study. A comparison study that looked at 18 different studies found that a sedentary lifestyle has been associated with a 90% increase in heart disease-related deaths, and a 147% increase in the risk of cardiovascular events compared to those with non-sedentary lifestyles.
Another study found that after 3 hours of sitting, artery dilation decreases by 50%.
Constant sitting for 10 years or more is believed to increase the risk of heart disease by 64%.
Cancer
Women who sit for at least 6 hours/day have a higher risk of cancer than those who sit for 3 hours/day (especially for ovarian, breast, endometrial, and multiple myeloma). For men, rates of prostate cancer are lower for those who don’t sit at work and do 30 minutes/day of walking or bicycling. Sedentary behavior is associated with a higher risk of colon cancer for men and women. It’s estimated that higher activity could prevent 100,000 cases of breast and colon cancer every year in the U.S.
Sitting is considered an independent risk factor, which means that a sedentary habit can raise your cancer risk even if you don’t smoke, you exercise regularly, and you live an otherwise healthy lifestyle.
Early Mortality
Researchers have estimated that if the U.S. population reduced sitting time by 3 hours/day, life expectancy would increase by 2 years.
Those who watched 4 hours of TV every day had a 50% increased risk of death from any cause and a 125% increased risk of events associated with cardiovascular disease, compared to those who watched less than 2 hours.
More Energy & Better Mood
An Australian study from 2012 found that sitting while using a computer was associated with more severe anxiety and depression.
A year-long study from the International Journal of Workplace Health Management involved 67 participants, half of whom sat, and the other half who had the option of standing. The study tracked participants in a real office. 61% of the participants who stood reported more energy, a positive outlook, as well as having less pain in muscles, joints, and back, and feeling stronger and more limber. And 65% reported greater productivity.
The “Take-a-Stand-Project” is a study from 2011 that followed a group of 24 office workers who used a standing desk for 4 weeks. At the end of the 4 weeks, participants were asked a series of questions. The results:
87% felt more comfortable
87% felt more energy
75% felt healthier
71% felt more focused
66% felt more productive
62% felt happier
33% felt less stress
After the standing desks were removed, the participants reported feeling worse.
Reduced Back Pain
One study from Minneapolis tested two groups for 7 weeks, one with a standing desk, the other without. The group that had a standing desk reduced their sitting time by 66 minutes/day and reduced their upper back and neck pain by 54%.
Experts believe that one of the reasons why using a standing desk can reduce back pain is due to increased circulation of oxygen and nutrients to the muscles, tendons, and ligaments in the back and neck. Many people who sit all day do so with poor posture, which can pinch nerves, reduce blood flow, and aggravate pressure points.
Using a standing desk doesn’t guarantee reduced back pain. The causes of back pain can be numerous and in many cases unknown. But experts recommend creating an ergonomic workspace, changing your work position every 20 minutes (or at least taking a quick walk), and staying in motion (as there is no perfect posture, only the next posture). A study from the University of Cincinnati found a significant decrease in shoulder and back pain when study participants varied their postures.
Keep in mind that when we talk about using a standing desk, we aren’t talking about standing all day. Experts recommend switching throughout the day between standing and sitting. It’s possible to be mostly inactive while standing as well, which isn’t helpful. Standing for long periods may cause tightness in the legs and back.
Ergonomics | Can using a standing desk help you lose weight? | yes_statement | using a "standing" "desk" can "help" you "lose" "weight".. "standing" while working at a "desk" can aid in "weight" loss. | https://newsnetwork.mayoclinic.org/discussion/standing-several-hours-a-day-could-help-you-lose-weight-mayo-clinic-research-finds/ | Standing several hours a day could help you lose weight, Mayo Clinic research finds

Standing several hours a day could help you lose weight, Mayo Clinic research finds
January 31, 2018
ROCHESTER, Minn. – Standing instead of sitting for six hours a day could help people lose weight over the long term, according to a Mayo Clinic study published in the European Journal of Preventive Cardiology.
In recent years, sedentary behavior, such as sitting, has been blamed for contributing to the obesity epidemic, cardiovascular disease and diabetes, says Francisco Lopez-Jimenez, M.D., senior author and chair of preventive cardiology at Mayo Clinic. Population-based studies report that, in the U.S., adults sit more than seven hours a day. The range across European countries is 3.2 to 6.8 hours of daily sitting time.
The study examined whether standing burns more calories than sitting in adults in the first systematic review and meta-analysis (combining data from multiple studies) to evaluate the difference. The researchers analyzed 46 studies with 1,184 participants. Participants, on average, were 33 years old; 60 percent were men; and the average weight was 143.3 pounds.
“Overall, our study shows that, when you put all the available scientific evidence together, standing accounts for more calories burned than sitting,” says Farzane Saeidifard, M.D., first author and cardiology fellow at Mayo Clinic.
The researchers found that standing burned 0.15 calories (kcals) per minute more than sitting. By substituting standing for sitting for six hours a day, a 143.3-pound adult would expend an extra 54 calories (kcals) in six hours. Assuming no increase in food intake, that would equate to 5.5 pounds in one year and 22 pounds over four years.
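A quick sketch of the study’s arithmetic. The 3,500-calories-per-pound conversion is a common rule of thumb and an assumption here, and the result lands near the press release’s rounded figure of 5.5 pounds:

```python
# Check of the study's numbers: 0.15 extra kcal/minute while standing.
extra_kcal_per_min = 0.15
minutes = 6 * 60                            # six hours of standing per day

daily_extra = extra_kcal_per_min * minutes  # 54 kcal over six hours
yearly_extra = daily_extra * 365
pounds_per_year = yearly_extra / 3500       # ~3,500 kcal per pound (rule of thumb)

print(daily_extra)                # 54.0
print(round(pounds_per_year, 1))  # 5.6
```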
“Standing for long periods of time for many adults may seem unmanageable, especially those who have desk jobs, but, for the person who sits for 12 hours a day, cutting sitting time to half would give great benefits,” Dr. Lopez-Jimenez says. The authors acknowledge that more research is needed to show whether replacing sitting with standing is effective and whether there are long-term health implications of standing for long periods.
In recent years, moderate to vigorous physical activity in daily life has been encouraged in efforts to maintain or lose weight and reduce the risk of heart disease, he says. But individuals cite barriers, such as time, motivation or access to facilities. Non Exercise Activity Thermogenesis, known as NEAT, a concept developed by James Levine, M.D., Ph.D. and Michael Jensen, M.D. ─ both Mayo Clinic endocrinologists and obesity researchers ─ focuses on the daily calories a person burns while doing normal daily activities, not exercising.
“Standing is one of the components of NEAT, and the results of our study support this theory,” Dr. Lopez-Jimenez says. “The idea is to work into our daily routines some lower-impact activities that can improve our long-term health.”
Of note, the researchers found that the difference in calories burned between standing and sitting is about twice as high in men as in women. This likely reflects the effect of greater muscle mass in men, because the calories burned are proportional to the muscle mass activated while standing, the researchers found.